I have a static React site that I am using to upload an image to S3 using a Cognito identity pool. I did this more out of curiosity than anything.
I understand that one way to do this would be to upload my image to my server, which could then upload it to S3.
But I want to know if there are any best practices for doing this directly from the client side, without a server.
One of my concerns with my current approach is that the identity pool ID is public. Any feedback is helpful.
This resource, Serverless Stack, will guide you through exactly what you want to do; it has helped me configure similar "serverless" deployments.
The pattern to do what you are describing is as follows (a rough code sketch follows the list):
Users authenticate against the user pool (or another identity provider), which returns a JWT.
Federated Identities (the identity pool) exchanges a valid JWT for temporary AWS IAM credentials (note: the identity pool's role must have a policy that allows those credentials to access S3).
Using those AWS IAM credentials, the client can upload the image directly to S3.
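Here is a rough browser-side sketch of that flow with the AWS SDK for JavaScript v3. The region, pool IDs, and bucket name are placeholders, and idToken stands for the JWT from the first step:

```js
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

// file: a File from an <input type="file">; idToken: the JWT returned by sign-in.
async function uploadImage(file, idToken) {
  const s3 = new S3Client({
    region: "us-east-1",
    credentials: fromCognitoIdentityPool({
      clientConfig: { region: "us-east-1" },
      identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder
      logins: {
        // user-pool provider name -> the signed-in user's JWT
        "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": idToken,
      },
    }),
  });

  await s3.send(
    new PutObjectCommand({
      Bucket: "my-upload-bucket", // placeholder
      Key: `uploads/${file.name}`,
      Body: file,
      ContentType: file.type,
    })
  );
}
```

The role's IAM policy is what actually limits what these credentials can do, so scope it to the bucket (or prefix) you intend users to write to.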
Regarding the identity pool: your identity pool ID is in fact a public value, just like your web app. Access is secured by the fact that users must successfully authenticate (for example, exist in your user pool with the correct password) before the identity pool issues credentials, and by the IAM policy attached to the role it hands out.
If you don't want the ID sitting directly in your client codebase (you shouldn't), what I usually do is set it as an environment variable in a .env file and then use a package called dotenv to expose it to the .js file that needs it. Then, of course, include the .env file in your .gitignore so it isn't tracked in version control.
More info here: https://github.com/motdotla/dotenv
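For example, a minimal sketch (the variable name is just an illustration); keep in mind that in a static front end the value is still compiled into the shipped bundle, so the .env file mainly keeps it out of version control:

```js
// .env  (listed in .gitignore)
// IDENTITY_POOL_ID=us-east-1:00000000-0000-0000-0000-000000000000

// In the Node build/start script:
require("dotenv").config(); // loads .env into process.env
const identityPoolId = process.env.IDENTITY_POOL_ID;
```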
Maybe you can consider a design similar to the one described here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_cognito.html
A common pattern is to have your server authenticate the user request and return a pre-signed URL. This can then be used to do an upload directly to S3 using standard HTTP methods. See Uploading Objects Using Pre-Signed URLs.
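As a rough sketch of that pattern with the AWS SDK for JavaScript v3 (bucket name and key are placeholders):

```js
// Server side (Node): hand the authenticated client a short-lived upload URL.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

async function createUploadUrl(key) {
  return getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: "my-upload-bucket", Key: key }), // placeholders
    { expiresIn: 300 } // URL is valid for 5 minutes
  );
}

// Client side: a plain HTTP PUT against the signed URL, no AWS credentials needed.
async function uploadWithSignedUrl(uploadUrl, file) {
  await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
}
```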
Your question about the public identity pool ID is really a separate question, so I'll keep my answer to the bulk of what you asked; secure authentication is a whole other topic you should probably research on your own first.
Pub/Sub is really easy to use from my local workstation. I set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of my .json authentication object.
But what if you need to interact with multiple different Pub/Sub projects? This seems like an odd way to do authentication. Shouldn't there be a way to pass the .json object in from the Java code?
How can I use the client libraries without setting the system's environment variable?
Thanks
You can grant a single service account access to Pub/Sub in different projects and set its key as the environment variable. See Pub/Sub Access Control for more details.
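To make that concrete, here is a rough Node.js sketch (project and topic names are made up; the single credential just needs the right Pub/Sub role on each topic, and the Java client can likewise address topics by their full projects/.../topics/... name):

```js
const { PubSub } = require("@google-cloud/pubsub");

// One client, one credential (Application Default Credentials or a key file),
// publishing to topics that live in two different projects.
const pubsub = new PubSub();

async function publishToBoth() {
  await pubsub
    .topic("projects/project-a/topics/orders")
    .publishMessage({ data: Buffer.from("hello from project A") });

  await pubsub
    .topic("projects/project-b/topics/audit")
    .publishMessage({ data: Buffer.from("hello from project B") });
}
```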
A service account JSON key file identifies a service account on GCP. A service account is the equivalent of a user account, but for an application (no human user, only a machine).
So, if your app needs to interact with several topics in different projects, you simply grant the service account's email address the correct role on each topic/project.
One more important thing: service account key files are useful for apps running outside GCP; otherwise they are a bad idea. Let me explain. A key file is a secret that you have to store and protect, and a file is easy to copy, to send by email, and even to commit to a public git repository... In addition, it's recommended to rotate the key at least every 90 days for security reasons.
So, for all these reasons and the security burden that key files represent, I don't recommend using them.
On your local environment, use your own user account (simply use the gcloud SDK and run gcloud auth application-default login). You aren't a machine (I hope!!).
On GCP components, use the component's identity: attach a service account when you deploy a service, create a VM, etc., and grant the correct roles to that service account without generating a JSON key.
For external apps (another cloud provider, on-premises, Apigee, a CI/CD pipeline, ...), generate a JSON key file for the service account; you can't avoid one in this case (the sketch below shows how to point the client library at such a file explicitly).
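And to the original question about avoiding the environment variable: the client libraries also accept credentials explicitly when you construct the client. A minimal Node.js sketch (path and project are placeholders; the Java builders expose an equivalent credentials provider option):

```js
const { PubSub } = require("@google-cloud/pubsub");

// Point this client at a specific key file instead of relying on
// GOOGLE_APPLICATION_CREDENTIALS (only do this for the external-app case above).
const pubsub = new PubSub({
  projectId: "other-project",
  keyFilename: "/secrets/other-project-sa.json",
});
```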
I am using several third-party API keys in my ReactJS + Firestore project, but I can't find a way to secure them with Firebase Hosting. How can I hide these API keys?
For example, Netlify provides an environment-variables feature that can be used to secure API keys:
I can store the API keys as variables in Netlify and they will be retrieved from there, which keeps them secured.
But how do I do this in Firebase?
I can't seem to find a similar setting where I can store the keys as environment variables in the hosting service.
If there is no such feature, is there another way to secure these API keys?
As for the Firebase API keys:
I have already read some answers and understand that Firebase API keys cannot be hidden.
Is there at least some way to restrict these Firebase API keys to just one secured URL? (I know that writing security rules is the best approach, but I am trying to find other options as well.)
I can't seem to find a way to restrict my Firebase project API key usage to one secured URL, and my attempts so far haven't been successful.
Below is how I retrieve data in my ReactJS code:
axios.post(`https://data.retrieval.com/1/data?key=API_KEY`, data)
I am trying to hide API_KEY in the production code.
In short: I want to secure the third-party API keys on my hosted website and also restrict my Firebase project API key to just one secure URL, but I am not able to do either right now.
Any suggestions or solutions?
Thank you for trying to help, and thank you for your time.
If you're using the API key in client-side code, there is always the chance that a malicious user can find the key and abuse it. The only way to protect against this is to not use the API key in client-side code, or to have a backend system that can protect access based on something else (such as Firebase's server-side security rules).
Since the third-party API likely doesn't have such a security model, you'll typically have to wrap it in your own middleware that you host in a trusted environment, such as a server you control or Cloud Functions. That's then where you ensure all access to the API is authorized, for example by adding your own security checks.
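As a rough sketch of that middleware idea with a callable Cloud Function (the endpoint comes from your question; the secret name and how it reaches process.env are placeholders, and the answers below show concrete ways to store it):

```js
const functions = require("firebase-functions");
const axios = require("axios");

// A callable function that talks to the third-party API on the client's behalf.
exports.retrieveData = functions.https.onCall(async (data, context) => {
  // Only signed-in Firebase users may use your key and quota.
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }

  // Placeholder: load the key from wherever you keep server-side secrets.
  const apiKey = process.env.THIRD_PARTY_API_KEY;

  const res = await axios.post(
    `https://data.retrieval.com/1/data?key=${apiKey}`,
    data
  );
  return res.data;
});
```

The client then calls retrieveData through the Firebase SDK (httpsCallable) and never sees the key.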
Not sure if this helps, but my Firebase Cloud Functions use this.
Create your secret by
firebase functions:config:set secret.API_KEY="THE API KEY"
Access your secret by using functions.config().secret.API_KEY
Note: this should only be used for server-side use cases, not in client code. By "server" I mean Cloud Functions for Firebase or your own backend.
The safe way I've found to store your third-party keys is using the Google Secrets Manager. It is now baked into the Firebase Functions SDK and works very well. You can find the information here, under the section titled "Store and access sensitive configuration information".
Two things worth mentioning:
There is a small bug in the syntax example, they forgot to add the https before onCall.
You'll need to give the service account that runs the deployed Cloud Function access to the secrets. Here are the official docs on how to do that. If you are deploying through Firebase, look for the service account whose address is [project-name]@appspot.gserviceaccount.com. If you have any doubts about which service account runs the Cloud Function, look under the Details tab in the Cloud Functions section of Google Cloud Platform and it will show you that information. Also, under the Variables tab, you can see which secrets your Cloud Function has access to.
This process makes it really easy to manage third-party keys, since you manage them at the project level and don't have to worry about them being stored elsewhere or about maintaining .env files. It also works with the Firebase Emulators, which use the credentials of the user running the emulators for access.
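A minimal sketch of that approach in first-generation functions syntax (the secret name is made up):

```js
const functions = require("firebase-functions");

// The secret is created once with: firebase functions:secrets:set THIRD_PARTY_API_KEY
exports.callThirdParty = functions
  .runWith({ secrets: ["THIRD_PARTY_API_KEY"] })
  .https.onCall(async (data, context) => { // note the https. from point 1 above
    // Injected from Secret Manager at runtime; never shipped to the client.
    const apiKey = process.env.THIRD_PARTY_API_KEY;
    // ...call the third-party API with apiKey here...
    return { ok: true };
  });
```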
I am working on a web application that acts as an interface to Google Cloud Storage (GCS).
I am using a backend service to retrieve the list of files I stored on GCS and their URLs via the JSON API, and return that to my web application. However, I was not able to load the files through those URLs; the requests always came back with 403 Forbidden.
I am not sure how GCS authentication works behind the scenes and whether it is possible to grant access to the web application directly. I am also not sure how I could attach application authentication information to the HTTP request. I know I can do this via the backend service, but for the sake of simplicity I wonder if it is possible to get around that. One of the things I tried was adding the web application's domain (which will be sent as the Referer in the HTTP request) to the bucket's ACL, which didn't work at all.
And thanks to what @Brandon pointed out below: I am OK with letting anyone who has access to the application view the contents of the GCS bucket, since it is an internal app and I have already checked their authentication when I first serve the web application.
====
Solution
I ended up using signed URLs that expire in 5 minutes, and I highly recommend interacting with GCS through the gcloud client libraries (their Python documentation is really good). Thanks again for the thorough answer!
You have a user on a web browser who wants to download an object that only your application's service account has read access for. You have a few options:
Expand access: make these objects publicly readable. Probably not the best choice if this info is sensitive, but if it's not, this is the easiest solution.
Give your app's credentials to the user so that they can authenticate as your app. This is a REALLY bad idea, and I probably shouldn't even list it here.
When a user wants to download a file, have them ask your app for it, and then have your app fetch the file and stream its contents to the user. This is the easiest solution for the client-side code, but it makes your app responsible for streaming file contents, which isn't really great.
When a user wants to download a file, have them ask your app for permission, and reply to them with some sort of token they can use to fetch the data directly from GCS.
#4 is what you want. Your users will ask your app for a file, your app will decide whether they are allowed to access that file via whatever you're doing (passwords? IP checks? Cookies? Whatever.) Then, your app will respond with a URL the user can use to fetch the file directly from GCS.
This URL is called a "signed URL." Your app uses its own private key to add a signature to a URL that indicates which object may be downloaded by the bearer and for how long the URL is valid. The procedure for signing URLs is somewhat tricky, but fortunately the gcloud storage libraries have helper functions that can generate them.
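For example, with the Node.js client library (bucket and object names are placeholders; the Python client has an equivalent blob.generate_signed_url helper):

```js
const { Storage } = require("@google-cloud/storage");

const storage = new Storage(); // authenticates as the app's service account

async function getDownloadUrl() {
  const [url] = await storage
    .bucket("my-internal-bucket") // placeholder
    .file("reports/q3.pdf")       // placeholder
    .getSignedUrl({
      version: "v4",
      action: "read",
      expires: Date.now() + 5 * 60 * 1000, // valid for 5 minutes
    });
  return url; // hand this back to the authorized user
}
```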
I've been googling the entire afternoon and I'm still not able to figure out what's the best solution to implement the following:
We have built a web app in AngularJS that interacts with a REST API built using Symfony. The app allows users to register, log in, and do stuff. Now, these users need to upload very big files (>60 GB) into their personal folders. A separate VM has been set up for this purpose (the data server), located in the same VLAN as the frontend, the backend, and the MySQL DB serving the data. The upload will be done using either HTTP (with the jQuery File Upload plugin) or an FTP client.
I'd like the users to authenticate to the data server (both via FTP and HTTP) using the credentials they already have for the app. For the FTP case, I'll use Pure-FTPd as the FTP server, which can validate user/password directly against MySQL. As far as I know, this is the most convenient solution, but criticism is welcome.
For the HTTP upload, we could proceed in a similar way: POST the user/password, validate against the DB, and return true/false. Since all the communication happens within the VLAN, security issues are less problematic. Nonetheless, I believe much more sophisticated solutions have already been developed.
My first thought was to build an OAuth server on top of Symfony and then authenticate the uploader (and future services) with their respective clients. Is this the right approach, or is it an overly complicated solution?
Alternatively, a service on the data server could validate the user's credentials against the REST API, receive a JWT, and create a new session for that particular client to list and update files in a particular folder. I'm not sure how to build this middleware, though: do I need another Symfony instance, or will a simple PHP script do the trick?
Please don't hesitate to share any thoughts you have on this. Any point of view will be much appreciated.
Thanks a lot
I am building an iPhone app that stores user logon credentials in an AWS DynamoDB table. In another DynamoDB table I store the locations of that user's files (which are stored in S3). What I don't understand is how to make this secure. If I use a Token Vending Machine that gives the application an ID with access to the user DynamoDB table, isn't it possible that any user could access the entire table and add or delete whatever they want? They would also be able to access the entire S3 bucket with this setup. Any recommendations on how I could set this up securely and properly?
I am new to user DB management, and any links to helpful resources would be much appreciated.
Regarding S3 and permissions, you may find the answer to the following question useful:
Temporary Credentials Using AWS IAM
IAM permissions are more fine-grained than you might think. You can allow or deny specific API calls, so for example you might only allow read operations, and you can restrict access to a specific resource. On S3 this means you can limit access to a specific file or folder; DynamoDB policies are mostly set at the table level, although fine-grained conditions (such as dynamodb:LeadingKeys) can restrict access to items keyed by the user's identity.
Personally I wouldn't give users direct access to DynamoDB; I'd have a web service mediating access to it. Users uploading directly to S3 or downloading straight from S3 is a good thing, though (your web service can generally hand out pre-signed URLs for that).
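To make the "specific file or folder" point concrete, here is a sketch of the kind of IAM policy you could attach to the role your token vending machine (or Cognito) hands out, scoping each identity to its own S3 prefix. The bucket name is a placeholder and the policy variable assumes Cognito-style federated identities:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-bucket/${cognito-identity.amazonaws.com:sub}/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-app-bucket",
      "Condition": {
        "StringLike": { "s3:prefix": "${cognito-identity.amazonaws.com:sub}/*" }
      }
    }
  ]
}
```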