Can I determine a SharePoint file's path in Microsoft Logic Apps? - azure-logic-apps

I am writing a logic app that responds to files that violate a specific policy. If no response to the violation is given, the file must be moved from wherever it is in SharePoint to a specified location. I know that all the files will be in the same domain, but the precise file path is not included in the alert details from Azure Sentinel, where the alerts are pulled from. The information originates in Microsoft Defender for Cloud Apps (formerly MCAS), and while the file path is available there, that field doesn't seem to be transferable into the logic app through their API. Is there a way to determine a file's path in a logic app without this data?
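One possible workaround (a sketch, not something confirmed in the question) is to have the logic app call Microsoft Graph's drive search endpoint with the file name taken from the alert, then read the folder from the `parentReference.path` field of the matching item. The site ID and file name below are made-up placeholders:

```python
# Sketch: locate a SharePoint file's path by searching Microsoft Graph
# from a Logic App HTTP action. The site ID and file name are
# placeholders -- substitute values from your alert payload.
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_file_search_url(site_id: str, file_name: str) -> str:
    """Build the Graph search request a Logic App HTTP action could call.

    Each item in the response carries a parentReference.path field,
    which gives the folder the file lives in -- the piece missing
    from the Sentinel alert.
    """
    return f"{GRAPH_BASE}/sites/{site_id}/drive/root/search(q='{quote(file_name)}')"

url = build_file_search_url("contoso.sharepoint.com,abc123", "quarterly-report.xlsx")
print(url)
```

The Logic App would issue this as an HTTP GET with a bearer token, then parse `parentReference.path` out of the JSON response to build the move operation.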

Related

Only allow S3 bucket access to authenticated users from specified domain

I'm currently building a React project that embeds small, static applications uploaded to an S3 bucket.
These applications are all built with HTML/CSS/vanilla JS, so they share the same structure, with an index.html file as the entry point.
Embedding these applications on the site through an iframe whose source links to the index.html works well, but I now want to make sure that only registered users with the correct access rights can reach a given application.
Originally I wanted to handle this using pre-signed URLs, but that doesn't seem to work, since I couldn't find a way to use a pre-signed URL to access all the files in an S3 folder.
I then thought about handling everything in React/Express: verify that the user is authenticated and has the correct role, and only then send the src link back to the frontend, where it gets embedded in the iframe. Additionally, I would add a bucket policy that only allows my specific domain to fetch the resources.
However, from other threads I've seen that it's easy to spoof the HTTP referrer, meaning that if somebody obtained the access link to the application on S3, they could simply send an HTTP request with a spoofed referrer and get their hands on the content.
I'm in over my head here and trying to figure out what the best architecture is. If it's something completely removed from my current setup, I'm happy to change it all around.
Generally, though, I would hope for something that adds a layer of security making it impossible to access the content in the S3 bucket unless the request comes directly from one specific host after authenticating there.
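For reference, the referrer-based bucket policy the question describes looks roughly like the sketch below; as the question itself notes, `aws:Referer` can be spoofed, so this is an obfuscation layer rather than real security. Bucket and domain names are placeholders.

```python
# Sketch of a referrer-restricted S3 bucket policy. Note: aws:Referer
# is trivially spoofable, so this is NOT a substitute for
# authentication -- the question's concern is valid.
import json

def referer_policy(bucket: str, allowed_origin: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowGetFromKnownReferer",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"StringLike": {"aws:Referer": [f"{allowed_origin}/*"]}},
        }],
    }
    return json.dumps(policy, indent=2)

print(referer_policy("my-app-bundles", "https://dashboard.example.com"))
```

For real protection, a common pattern is to front the bucket with CloudFront and issue signed cookies after the user authenticates: unlike a pre-signed URL, which covers a single object, a signed cookie can grant access to a whole path prefix, which addresses the "all the files in a folder" problem.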

Azure Logic Apps HTTP connector for ADLS is corrupting the zip file

I am using Azure Logic Apps to get the attachments from an email (Outlook) and dump them into Azure Data Lake Gen2, using the HTTP connector to write the file into ADLS.
I am able to dump the file into the data lake, but the zip file is getting corrupted.
Previously I had Azure Data Lake Gen1 and used the ADLS Upload File action to upload the attachment, and I didn't face this kind of issue.
I am not sure whether I am making a mistake or there is an issue with the HTTP connector, hence I am seeking help from the community.
I am also attaching part of the logic app flow:
It is always better to use the built-in connectors in Logic Apps.
For Azure Data Lake Storage Gen2 (ADLS Gen2) accounts, you can use the Azure Blob Storage connector (recommended by Microsoft), since these accounts have multi-protocol access. You can read more about this feature, including its availability and known limitations, in this blog.
Known issues and limitations:
The Extract archive to folder action ignores empty files and folders in the archive; they are not extracted to the destination.
The trigger does not fire if a file is added or updated in a subfolder. If triggering on subfolders is required, multiple triggers should be created.
Logic apps can't directly access storage accounts that are behind firewalls if both are in the same region. As a workaround, you can put your logic apps and storage account in different regions. For more information about enabling access from Azure Logic Apps to storage accounts behind firewalls, see Access storage accounts behind firewalls.
For more information about this, you can visit here.
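To illustrate the suggestion, the Blob connector's Create blob action has roughly the following shape inside a workflow definition. The connection name, folder path, and loop variable here are placeholders, and the exact `path` segment can differ between connector versions, so treat this as a sketch rather than a copy-paste definition:

```python
# Rough shape of the Blob connector's "Create blob" action in a Logic
# App workflow definition, as an alternative to the raw HTTP connector.
# All names (connection, folder, loop variable) are placeholders.
import json

create_blob_action = {
    "Create_blob": {
        "type": "ApiConnection",
        "inputs": {
            "host": {"connection": {"name": "@parameters('$connections')['azureblob']['connectionId']"}},
            "method": "post",
            "path": "/datasets/default/files",
            "queries": {
                "folderPath": "/attachments",
                "name": "@items('For_each')?['Name']",
            },
            # Decoding ContentBytes back to binary avoids the text
            # re-encoding that tends to corrupt zip content.
            "body": "@base64ToBinary(items('For_each')?['ContentBytes'])",
        },
    }
}

print(json.dumps(create_blob_action, indent=2))
```

The key point is the `body` expression: attachment content arrives base64-encoded, and passing it through as binary rather than as text is what keeps the zip intact.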

GWT: how to store information on Google App Engine?

In my GWT application, a 'root' user uploads a specific text file with data, and that data should be available to anyone who has access to the app (using GAE).
What's the classic way to store data that will be available to all users? I don't want to use any database (Objectify!?) since this is a relatively small amount of information and it is changed only from time to time, by root.
I was wondering if there is some static MAP at the 'engine level' (not the user's session) where this info can be stored (and if the server goes down, no biggie: root will upload it again).
Thanks
You have three primary options:
Add this file to your /war/ directory and deploy it with the app. This is what we typically do with static files that rarely change (like .css files, images, etc.). The file will be available to all users, whether they are authenticated or not.
Add this file to your /war/WEB-INF/ directory and deploy it with the app. This file will be available to your server-side code, so you can read it on the server side and show it to a user. This way you can decide which users can see this file and which users should not have access to it.
Upload this file to Google Cloud Storage. You can do that through the app, or you can simply upload it manually to a bucket using the GCS console or the gsutil command-line tool. Then you simply provide a link to your users. The advantage of this option is that you do not have to redeploy your app when the file changes.
The only reason to go with the first two options is to have this file under version control. If you don't need that, I would recommend going with the GCS option.
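The second option can be sketched as follows (in Python for brevity; in a GWT/GAE app the same logic would live in a Java servlet). The file name and role names are made-up placeholders:

```python
# Sketch of option 2: the data file ships with the app under WEB-INF
# (so it is not directly web-accessible), the server reads it, and
# only authorized users receive the contents. File and role names are
# placeholders.
from pathlib import Path

DATA_FILE = Path("WEB-INF") / "shared-data.txt"
ALLOWED_ROLES = {"root", "member"}

def read_shared_data(user_role: str, base_dir: Path = Path(".")) -> str:
    """Return the shared data, but only for roles allowed to see it."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError("user may not view the shared data")
    return (base_dir / DATA_FILE).read_text(encoding="utf-8")
```

Because WEB-INF is never served directly, the access check in code is the only path to the file, which is exactly the selectivity the answer describes.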

Display onedrive filelist on my website

Here is the basic concept of what I am trying to do. My web app allows my clients to log in to a dashboard.
One of the things I want to show on their dashboard is THEIR work files.. ie: PDF files.
I store these files in OneDrive, in a separate folder for each client:
Root Doc Directory
  - Client A
    - File1.pdf
    - File2.pdf
  - Client B
    - File1.pdf
  etc.
So when client A logs in, I want to show all the files in the Client A folder.
The concept sounds simple, and with storage on my own server I could do this easily, but I can't find how to do it using OneDrive.
Does anyone out there have any ideas? All the info I have found about the OneDrive APIs requires users to actually log into OneDrive, which I don't want.
Basically, you're using OneDrive wrong. You should be asking each user of your service to sign in with their Microsoft account and storing the files in that user's OneDrive. Storing them all in your own OneDrive means users can't access those files outside of your app (for example, by logging into OneDrive). Instead of using Microsoft accounts as the security boundary for those files, you're putting the entire security burden on your own ability to protect access to your OneDrive account. Doing it the way you proposed is strongly not recommended.
You can pretty easily integrate OAuth into your website so that a user can connect your site to OneDrive and then have access to their files from OneDrive in your service.
The alternative would be to use something like Azure Blob Storage to store/retrieve these files. Then your app would just have the set of access keys required to access storage and you wouldn't have to deal with signing into a single OneDrive account from the service and keeping the access and refresh tokens up to date.
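As a sketch of the recommended approach: once the OAuth connection described above is in place, the dashboard can list the signed-in user's files through Microsoft Graph's path-based addressing. The folder name below mirrors the question's layout and is a placeholder:

```python
# Sketch: list one client's folder via Microsoft Graph after the user
# has connected their OneDrive through OAuth. The folder path is a
# placeholder taken from the question's example layout.
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def client_files_url(client_folder: str) -> str:
    # GET this URL with the user's bearer token; the response lists the
    # folder's children (File1.pdf, File2.pdf, ...) as JSON.
    return f"{GRAPH}/me/drive/root:/{quote(client_folder)}:/children"

url = client_files_url("Root Doc Directory/Client A")
print(url)
```

The `/me/drive` form works against whichever OneDrive the signed-in user authorized, so each client automatically sees only their own files.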

Creating and serving temporary HTML files in Azure

We have an application that we would like to migrate to Azure for scale. There is one place that concerns me before starting however:
We have a web page that the user is directed to. The code-behind on the page goes out to the database and generates an HTML report. The new HTML document is placed in a temporary file along with a bunch of charts and other images. The user is then redirected to this new page.
In Azure, we can never be sure that the user is going to be directed to the same machine for multiple reasons: the Azure load balancer may push the user out to a different machine based on capacity, or the machine may be deprovisioned because of a problem, or whatever.
Because these are only temporary files that get created and deleted very frequently I would optimally like to just point my application's temp directory to some kind of shared drive that all the web roles have read/write access to, and then be able to map a URL to this shared drive. Is that possible? or is this going to be more complicated than I would like?
I can still have every instance write to its own local temp directory as well. It only takes a second or two to serve the files, so I'm OK with taking the risk of that instance going down during that window. The question in this regard is whether the redirect to the temp HTML file will use HTTP 1.1 and maintain the connection to that specific instance.
thanks,
jasen
There are two things you might want to look at:
Use Windows Azure Web Sites, which supports a kind of distributed filesystem (based on blob storage). Files you store "locally" in your Windows Azure Web Site will be available from each server hosting that Web Site (if you use multiple instances).
Serve the files from Blob Storage. Instead of saving the HTML files locally on each instance (or trying to make users stick to a specific instance), simply store them in Blob Storage and redirect your users there.
Good stuff from @Sandrino. A few more ideas:
Store the resulting HTML in the in-role cache (which can be collocated in your web role instances) and serve the HTML from cache (shared across all instances).
Take advantage of the CDN. You can map a "CDN" folder to the actual edge cache, so you generate the HTML in code once, and it stays cached until TTL expiry, when you must generate the content again.
I think Azure Blob Storage is the best place to store your HTML files so that they can be accessed by multiple instances. You can redirect the user to the blob content, or you can write a custom page that renders content from the blob.
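The blob-based flow suggested above can be sketched like this; the account and container names are placeholders, and the actual upload call (e.g. azure-storage-blob's `upload_blob`) is left as a comment since it needs live credentials:

```python
# Sketch of the blob-based temp-report flow: write the generated HTML
# to Blob Storage under a unique name, then redirect the user to it.
# Account and container names are placeholders; the real upload call
# is omitted.
import uuid

ACCOUNT = "reportsaccount"
CONTAINER = "temp-reports"

def temp_report_blob_url(report_html: str) -> tuple[str, str]:
    """Pick a collision-free blob name and return (name, public URL)."""
    blob_name = f"report-{uuid.uuid4().hex}.html"
    url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{blob_name}"
    # upload_blob(blob_name, report_html, content_type="text/html") here
    return blob_name, url

name, url = temp_report_blob_url("<html>...</html>")
print(url)
```

Because every instance writes to the same container, it no longer matters which machine the load balancer picks: the redirect target is instance-independent, which resolves the affinity concern in the question.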