iPhone App Built on Amazon Web Services - database

I am building an iPhone app that stores user logon credentials in an AWS DynamoDB table. In another DynamoDB table I am storing the locations of files (stored in S3) for that user. What I don't understand is how to make this secure. If I use a Token Vending Machine that gives the application credentials with access to the user DynamoDB table, isn't it possible that any user could access the entire DB and just add or delete any information they desire? They would also be able to access the entire S3 bucket with this setup. Any recommendations on how I could set this up securely and properly?
I am new to user DB management, and any links to helpful resources would be much appreciated.

Regarding S3 and permissions, you may find the answer to the following question useful:
Temporary Credentials Using AWS IAM

IAM permissions are more fine-grained than you think. You can allow/disallow specific API calls, so for example you might only allow read operations. You can also allow access to a specific resource only. On S3 this means you can limit access to a specific file or folder, but DynamoDB policies can only be set at the table level.
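For illustration, here is a minimal sketch of such a scoped-down policy (the bucket name, table name, user ID, and account ID are all hypothetical); note that the S3 statement can target a per-user prefix while the DynamoDB statement can only target the whole table:

```python
import json

user_id = "user-1234"  # hypothetical authenticated user id

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # S3 access limited to this user's own prefix, read/write only
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::my-app-uploads/{user_id}/*",
        },
        {
            # DynamoDB access limited to read operations, but on the whole table
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserFiles",
        },
    ],
}

print(json.dumps(policy, indent=2))
```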
Personally I wouldn't allow users direct access to DynamoDB - I'd have a web service mediating access to it. Letting users upload directly to S3 or download straight from S3 is a good thing, though (your web service can generally hand out pre-signed URLs for that).
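As a rough sketch of that mediation, assuming a hypothetical UserFiles table keyed by user_id and a my-app-uploads bucket, using boto3:

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

def list_download_urls(user_id, expires_in=300):
    """Return short-lived download URLs for the files owned by user_id."""
    # The backend, not the client, talks to DynamoDB, so users never hold
    # DynamoDB credentials at all.
    items = dynamodb.Table("UserFiles").query(
        KeyConditionExpression=Key("user_id").eq(user_id)
    )["Items"]
    # Hand back pre-signed GET URLs so the client downloads straight from S3.
    return [
        s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-app-uploads", "Key": item["s3_key"]},
            ExpiresIn=expires_in,
        )
        for item in items
    ]
```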


Amazon S3 for Media Storage for Restful API

I have a REST API that supports a multi-user React app. It has been running on an EC2 instance for a while, and I am now adding uploads of photos/PDFs that will be attached to specific users. I was able to accomplish this in testing using EFS, which I had provisioned on the same EC2 instance as the API, but I am looking to move to S3.
When I was testing it out using EFS, I would send everything through the REST API: the user would do a POST, the API would store the file in EFS along with metadata in my DB recording where the file was stored, and to retrieve the data the user would do a GET to the REST API and the server would fetch the file from EFS based on the metadata in the DB.
I am wondering what the usual pattern for S3 is. Do I still have to send everything through my REST API to be sure that users only have access to the PDFs/images they are supposed to see, or is there a way to verify their identity and request the resources directly from S3 on the front end, with my API just returning a list of S3 URLs for the files?
The particular use case I have in mind is letting users upload profile pictures; then, when a user searches for another user by name, the profile pictures of all the users returned by the query can be displayed in the list of results.
As far as I know, there is no single "normal" way to deal with this particular situation - any of the following could make sense depending on your needs.
Here are some options:
Option 1
It's possible to safely allow users to access resources directly from S3 by using AWS STS to generate temporary credentials that your users can use to call the S3 APIs (see the sketch after these options).
Option 2
If you're happy for the pictures to be public, you could configure the bucket as a static website and simply use those public URLs in your web application.
Option 3
Use CloudFront to serve private content from your S3 buckets.
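Here is a minimal sketch of Option 1, assuming a hypothetical bucket my-app-media and a backend that already holds long-lived AWS credentials; it mints short-lived, scoped-down credentials the client can use to call S3 directly:

```python
import json
import boto3

sts = boto3.client("sts")

def temporary_credentials_for(user_id):
    """Return temporary credentials limited to this user's S3 prefix."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::my-app-media/{user_id}/*",
        }],
    }
    resp = sts.get_federation_token(
        Name=user_id[:32],           # token name is limited to 32 characters
        Policy=json.dumps(policy),
        DurationSeconds=3600,        # one hour
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    return resp["Credentials"]
```

The client configures the AWS SDK with those credentials and talks to S3 itself; when they expire it simply asks the backend for a fresh set.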

exporting data for analytics use in SaaS

We are a SaaS product and we would like to have per-user data exports that will be used with various analytics (BI) tools like Tableau or Power BI. Instead of managing all those exports manually, we thought of using a cloud database such as AWS Redshift (which would be part of our service). But then it is not clear how a user would access those databases naturally, unless we do some kind of SSO integration with AWS.
So - what is the best practice for exporting data for analytics use in SaaS products?
In this case you can build your security into your backend API layer.
First, set up processes to load your data into Redshift, then make sure that only your backend API server/cluster has access to Redshift (e.g. by placing it in a VPC with no external IP access to Redshift).
Now that you have your data, you can validate the user as usual through your backend service; when a user requests a download through the backend API, the backend can build a query that extracts from Redshift only the data that user is allowed to see, based on the user's security role. To make this possible you may need to build some kind of security column into your Redshift data model.
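As a sketch of that last step, assuming a hypothetical events table with a tenant_id security column and psycopg2 as the driver (Redshift speaks the Postgres wire protocol):

```python
import os
import psycopg2

def export_rows_for(tenant_id):
    """Extract only the rows the calling tenant is allowed to see."""
    conn = psycopg2.connect(
        host="my-cluster.example.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
        port=5439,
        dbname="analytics",
        user="backend_api",
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        # The backend appends the tenant filter; end users never write raw SQL.
        cur.execute(
            "SELECT event_time, event_type, value FROM events WHERE tenant_id = %s",
            (tenant_id,),
        )
        return cur.fetchall()
```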
I am assuming getting data to redshift is not a problem.
If I understand correctly, what you are looking for is an OEM solution.
The problem is how to mimic the security model you have in place for your SaaS offering.
That depends on how complex your security model is.
If it is as simple as authenticating the user, after which he has access to all tenant data (or the data can easily be filtered per user), then things are simple for you. Trusted authentication will let you authenticate that user, and user filtering will let you show him everything he has access to.
But here is the kicker: if your security model is really complex, it can become very difficult to mimic it within these products.
For integrating Tableau, this link will help:
https://tableau.github.io/embedding-playbook/#
Power BI: I am not a fan of this product. I tried to embed a view in one of my applications and data refresh was a big issue.
It's almost like they want you to be an Azure shop for real-time reporting. (I like GCP more.)
If you create the APIs and populate datasets yourself, they have crazy restrictions like 1 MB/sec, etc.
In other cases, datasets can only be refreshed 8 times.
I gave up on them.
Very recently I got a call from Sisense and they seemed promising from an OEM perspective as well. You might want to try them.

Uploading Images to S3 from front end best practices

I have a static React site that I am using to upload an image to S3 using an identity pool in Cognito. I did this more out of curiosity than anything.
I understand that one way to do this would be to upload my image to my server, which could then upload it to S3.
But I want to know if there are any best practices for doing this directly from the client side, without a server.
One of my concerns with my current approach is that the identity pool ID is public. Any feedback is helpful.
This resource, Serverless Stack, will guide you through exactly what you want to do; it has helped me configure similar "serverless" deployments.
The pattern to do what you are describing is as follows (a rough sketch in code follows the list):
Users authenticate against the identity provider (e.g. a Cognito user pool); this returns a JWT
Cognito Federated Identities (the identity pool) exchanges a valid JWT for AWS IAM credentials (note: the identity pool must be configured with a role/policy that allows those IAM credentials to access S3)
Using those AWS IAM credentials, the client can upload the image to S3
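A rough sketch of that exchange, shown with boto3 for clarity (in a browser app you would use the AWS JavaScript SDK or Amplify, but the calls are the same); the pool IDs and provider name below are hypothetical:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# The Cognito Identity (federated identities) APIs can be called without AWS credentials.
identity = boto3.client("cognito-identity", region_name="us-east-1",
                        config=Config(signature_version=UNSIGNED))

def s3_client_for(id_token):
    """Exchange a user-pool JWT for temporary IAM credentials, then build an S3 client."""
    logins = {
        # hypothetical user pool acting as the identity provider
        "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": id_token
    }
    identity_id = identity.get_id(
        IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",
        Logins=logins,
    )["IdentityId"]
    creds = identity.get_credentials_for_identity(
        IdentityId=identity_id, Logins=logins
    )["Credentials"]
    return boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretKey"],
        aws_session_token=creds["SessionToken"],
    )

# Step 3: upload with the scoped credentials
# s3_client_for(jwt).upload_file("avatar.png", "my-app-uploads", "user-1234/avatar.png")
```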
In regards to the identity pool, your identity pool ID is in fact a public resource, just like your web app. The setup is secured by the fact that users must exist in the user pool and must know the correct password for that user.
If you don't want to expose the ID in your client codebase (you shouldn't), what I usually do is set the ID as an environment variable in a .env file, then use a package called dotenv to expose it to the .js file that needs it. Then, of course, include the .env file in your .gitignore so it isn't tracked in version control.
More info here: https://github.com/motdotla/dotenv
Maybe you can consider similar design as described here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_cognito.html
A common pattern is to have your server authenticate the user's request and return a pre-signed URL. This can then be used to upload directly to S3 using standard HTTP methods. See Uploading Objects Using Pre-Signed URLs.
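As a minimal sketch of the server side of that pattern, assuming a hypothetical bucket and per-user key layout:

```python
import boto3

s3 = boto3.client("s3")

def upload_url_for(user_id, filename):
    """Return a short-lived URL the client can PUT the file to directly."""
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-app-uploads", "Key": f"{user_id}/{filename}"},
        ExpiresIn=300,  # five minutes
    )

# The client then does: PUT <url> with the file body (no AWS credentials needed).
```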
Your question about the public identity pool ID is really a separate question. I'll keep my answer to the bulk of your question, as secure authentication is a whole other topic you should probably research on your own first.

Saving images in Azure storage

I am building a web application where users can upload images and videos and store them in their account. I want to store these files somewhere and save only the URL in the DB.
What is the right way to do it using Azure services? Is there a dedicated server for this, or some VM?
Yes, there is a dedicated service for this purpose: Azure Blob Storage. You are highly advised to save any and all user-uploaded content to that service instead of to the local file system.
The provided link has samples for almost any language that has a client SDK provided by Microsoft.
If, in the end, you use a platform or language that is not directly supported by an SDK, you can always refer to the Blob Storage REST API documentation.
You will need to go through the Blob service concepts to get a deeper understanding of the service and how to use it.
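For illustration, here is a minimal sketch using the Python SDK (azure-storage-blob), assuming a hypothetical container named user-uploads; the returned blob URL is what you would save in your DB:

```python
import os
from azure.storage.blob import BlobServiceClient

# Connection string comes from the storage account's access keys.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

def upload_user_file(user_id, filename, data):
    """Upload the bytes to Blob Storage and return the blob URL to store in the DB."""
    blob = service.get_blob_client(container="user-uploads",
                                   blob=f"{user_id}/{filename}")
    blob.upload_blob(data, overwrite=True)
    return blob.url
```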

Google storage for each user

I want to create an application that consists of a desktop application and Google Cloud Storage, so that each of my clients has separate cloud storage. Does Google provide such a thing?
More info.
I wrote this question because I do not know what Google App Engine can offer.
I need some database hosting for my desktop application. In the future I think I will switch to GWT and App Engine. I want to sell my application, so no client should be able to access another client's database. I was thinking it would be safer if each client had their data in a separate database, so that a mistake in my code couldn't expose it.
You can separate data in the Datastore using namespaces on Google App Engine:
https://developers.google.com/appengine/docs/java/multitenancy/multitenancy
It's up to you to decide how to implement the namespaces; you can separate them by your user authentication system.
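As a minimal sketch, using the legacy first-generation Python App Engine APIs (the link above is the Java version, but the Python API is analogous) and assuming the namespace is derived from the signed-in Google user:

```python
from google.appengine.api import namespace_manager, users
from google.appengine.ext import ndb

class FileRecord(ndb.Model):
    name = ndb.StringProperty()

def save_record(name):
    """Write the record into the calling client's own namespace."""
    user = users.get_current_user()
    # Every Datastore call after this goes to that client's namespace only.
    namespace_manager.set_namespace(user.user_id())
    FileRecord(name=name).put()
```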
You can create a folder per client and restrict that folder's access to the user (works only with Google Accounts), or you can do the same with buckets and create a bucket per user (which might be overhead if you have a lot of users).
For the database, the App Engine Datastore can separate data by namespaces. This doesn't require any user account, and it's your responsibility to select which namespace to work with per request.
You can use the GAE namespace capability as pointed out above by @dragonx, without Google authentication.
Use a client name as the namespace identifier (it needs to be unique). How you fetch this client name is up to you; it can be stored in GAE itself if you wish, or derived from a client-specific URL.
Do have a look at the GAE multitenancy link https://developers.google.com/appengine/docs/java/multitenancy/multitenancy
The example here can be easily adapted to use any string identifier per client.
