We develop an S3 proxy product that implements part of the S3 REST API.
Is it possible to configure Snowflake to point at a different endpoint/IP address (like "--endpoint-url" in the AWS CLI)?
Thanks, Tzahi
You can point Snowflake at S3 buckets for reading/writing (importing/exporting) data, and you can create external tables over S3 buckets, which means yes, you can redirect those; but as for Snowflake's internal S3 storage, no.
Of those two, redirecting a bucket you already control seems of low value, since you control it anyway, and the latter you cannot do.
Related
I have a REST API that supports a multi-user React app, which has been running on an EC2 instance for a while, and I am now adding uploads of photos/PDFs that will be attached to specific users. In testing I accomplished this with EFS provisioned to the same EC2 instance the API runs on, but I am looking to move to S3.
When I was testing with EFS, I sent everything through the REST API: the user would POST a file, the API would store it in EFS and record where it was stored as metadata in my DB, and to retrieve it the user would GET from the REST API and the server would fetch the file from EFS based on the metadata in the DB.
I am wondering what the usual pattern for S3 is. Do I still have to send everything through my REST API to be sure that users only have access to the PDFs/images they are supposed to, or is there a way to verify their identity and request the resources directly from S3 on the front end, with my API just returning a list of S3 URLs for the files?
The particular use case I have in mind: users can upload profile pictures, and when a user searches for another user by name, the profile pictures of all the users returned by the query are displayed in the list of results.
As far as I know, there is no single "normal" way to deal with this particular situation - either approach could make sense depending on your needs.
Here are some options:
Option 1
It's possible to safely allow users to access resources directly from S3 by using AWS STS to generate temporary credentials that your users can utilise to call the S3 APIs (a sketch of this appears after this list).
Option 2
If you're happy for the pictures to be public, you could configure a bucket as a static website and simply use those public URLs in your web application.
Option 3
Use CloudFront to serve private content from your S3 buckets.
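For Option 1, here is a minimal sketch of what the backend could do, assuming Node with the AWS SDK for JavaScript v3; the bucket name, region, prefix layout, and session name are placeholders, not anything from the question:

```typescript
// Backend sketch: hand a signed-in user temporary credentials scoped to
// their own prefix in the bucket (bucket name and prefix are assumptions).
import { STSClient, GetFederationTokenCommand } from "@aws-sdk/client-sts";

const sts = new STSClient({ region: "us-east-1" });

export async function credentialsForUser(userId: string) {
  // Inline policy restricting the temporary credentials to
  // s3://my-app-uploads/<userId>/* only.
  const policy = JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["s3:GetObject", "s3:PutObject"],
        Resource: [`arn:aws:s3:::my-app-uploads/${userId}/*`],
      },
    ],
  });

  const { Credentials } = await sts.send(
    new GetFederationTokenCommand({
      Name: `user-${userId}`,   // session name, max 32 chars
      Policy: policy,
      DurationSeconds: 900,     // 15 minutes
    })
  );

  // Return AccessKeyId / SecretAccessKey / SessionToken to the React client,
  // which can then call S3 directly with them until they expire.
  return Credentials;
}
```

Note that GetFederationToken has to be called with long-term IAM user credentials on the server side; the front end only ever sees the short-lived keys.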
I have developed a React Native app, with AWS Amplify supporting the backend (DynamoDB, S3). All users of the app have to sign in with Auth.signIn() and are part of a user pool.
Once signed in, they can upload videos to S3 via the app, or view videos in the app that are in the S3 bucket, which is PUBLIC.
I use the path to the S3 video (https://myS3bucket....) as the source URL of the video. However, the videos are only visible in my app when the bucket is public. With any other setting (protected/private), no video is visible. How can I make this more secure?
S3 Buckets have 3 methods of managing security:
IAM: Any user or role within the same AWS account as the bucket can be granted permissions to interact with the S3 Bucket and its objects.
S3 Bucket Policies: Grant bucket-wide (or prefix-scoped) access to S3 buckets (a sketch of setting one appears after this list).
S3 ACLs: Per-object permissions.
It's generally advised against using S3 ACLs these days, as their functionality has been superseded by S3 bucket policies. Only use them if you need a specific object to have a different set of permissions.
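As a hedged sketch of the bucket-policy option using the AWS SDK for JavaScript v3 (the bucket name, account ID, role name, and prefix below are all placeholders):

```typescript
// Sketch: attach a bucket policy that lets one IAM role read objects under
// the "public-assets/" prefix. All ARNs and names here are illustrative.
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

export async function applyReadPolicy() {
  const policy = {
    Version: "2012-10-17",
    Statement: [
      {
        Sid: "AllowAppRoleRead",
        Effect: "Allow",
        Principal: { AWS: "arn:aws:iam::123456789012:role/app-backend-role" },
        Action: "s3:GetObject",
        Resource: "arn:aws:s3:::my-app-bucket/public-assets/*",
      },
    ],
  };

  await s3.send(
    new PutBucketPolicyCommand({
      Bucket: "my-app-bucket",
      Policy: JSON.stringify(policy),
    })
  );
}
```

The same policy document could just as well be pasted into the bucket's Permissions tab in the console; the SDK call is only one way to apply it.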
I suggest not making the files or the bucket public if you want authenticated users to upload and/or download files. Instead, use S3 signed URLs to give users access to files. In other words, the backend authenticates users, generates signed URLs for them, and the React Native app then consumes that URL as, for example, a video source (see the sketch below).
You will need to change a few things, but this guide should cover that.
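A rough sketch of the signed-URL approach with the AWS SDK for JavaScript v3 (bucket name, key, and expiry are assumptions):

```typescript
// Backend sketch: after authenticating the user, mint a short-lived signed
// URL for one video object in the otherwise-private bucket.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

export async function signedVideoUrl(videoKey: string): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: "my-video-bucket", // placeholder bucket name
    Key: videoKey,             // e.g. "videos/abc123.mp4"
  });
  // The URL is valid for 15 minutes; after that the app requests a new one.
  return getSignedUrl(s3, command, { expiresIn: 900 });
}
```

The React Native video player can use the returned URL as its source, and the bucket itself stays private.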
I have recently published an article which describes the security best practices in detail and helps address the following points:
How to secure S3 buckets that store sensitive user data and the application code.
How to securely configure a CloudFront distribution.
How to protect frontend apps against common OWASP threats with CloudFront Functions.
To learn more, have a look at the article.
Best,
Stefan
I'm writing an app where users can write Notes, and each note can have many files attached to it.
I would like users to be able to click 'Browse', select multiple files, which will be uploaded when the user clicks 'Save Note'.
I want these files to be uploaded directly into Amazon S3 (or some other cloud storage solution?) without going through my server, so I don't have to worry about uploads blocking my server.
What is the best way to accomplish this?
I have seen many examples of uploading directly into Amazon S3, but none of them support multiple files. Will I somehow have to do this all in JavaScript by looping through a collection of files selected with the Browse button?
Thanks!
Technically, your JavaScript running in the browser could make RESTful HTTP calls to AWS and store data in S3, but then you would be exposing the security credentials used to connect to AWS in the script, which is not good.
I guess the only safe way is to process it through a web server which can securely access AWS and store the notes (see the sketch below), or you could just write those files to local disk (where the web server sits) and schedule a tool like s3cmd to automatically sync them with an S3 bucket.
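A minimal sketch of the "process it through a web server" route, assuming Node with Express, multer for multipart parsing, and the AWS SDK for JavaScript v3; the route, field name, key scheme, and bucket name are all illustrative:

```typescript
// Sketch: the browser posts files to the server, the server pushes them to S3.
// AWS credentials stay on the server; the browser never sees them.
import express from "express";
import multer from "multer";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // fine for small files
const s3 = new S3Client({ region: "us-east-1" });

// Accepts multiple files from <input type="file" multiple name="attachments">
app.post("/notes/:noteId/files", upload.array("attachments"), async (req, res) => {
  const files = (req.files ?? []) as Express.Multer.File[];
  const keys: string[] = [];

  for (const file of files) {
    const key = `notes/${req.params.noteId}/${file.originalname}`; // placeholder key scheme
    await s3.send(
      new PutObjectCommand({
        Bucket: "my-notes-bucket", // placeholder bucket
        Key: key,
        Body: file.buffer,
        ContentType: file.mimetype,
      })
    );
    keys.push(key);
  }

  res.json({ uploaded: keys });
});

app.listen(3000);
```

The note-saving endpoint can then store the returned keys in the database alongside the note, the same way the question describes doing with EFS paths.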
I am building an iPhone app that stores user logon credentials in an AWS DynamoDB table. In another DynamoDB table I store the locations of files (stored in S3) for each user. What I don't understand is how to make this secure. If I use a Token Vending Machine that gives the application an ID with access to the user DynamoDB table, isn't it possible that any user could access the entire DB and add or delete whatever information they desire? They would also be able to access the entire S3 bucket with this setup. Any recommendations on how I could set this up securely and properly?
I am new to user DB management, and any links to helpful resources would be much appreciated.
Regarding S3 and permissions, you may find the answer on the following question useful:
Temporary Credentials Using AWS IAM
IAM permissions are more fine-grained than you think. You can allow/disallow specific API calls, so for example you might only allow read operations. You can also allow access to a specific resource only. On S3 this means you can limit access to a specific file or folder, but DynamoDB policies can only be set at the table level.
Personally I wouldn't allow users direct access to DynamoDB - I'd have a web service mediating access to that - although users being able to upload directly to S3 or download straight from S3 is a good thing (your web service can generally give out pre-signed URLs for that; see the sketch below).
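To illustrate the pre-signed URL idea with the AWS SDK for JavaScript v3 (the bucket name, key scheme, and expiry are assumptions), the web service can hand out an upload URL that is valid only for one specific key under the authenticated user's prefix:

```typescript
// Sketch: the web service authenticates the user first (e.g. via the Token
// Vending Machine), then signs a PUT URL for exactly one object key, so a
// user can never touch other users' files.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

export async function signedUploadUrl(userId: string, fileName: string) {
  const command = new PutObjectCommand({
    Bucket: "my-user-files",            // placeholder bucket
    Key: `users/${userId}/${fileName}`, // per-user prefix
    ContentType: "application/octet-stream",
  });
  return getSignedUrl(s3, command, { expiresIn: 300 }); // 5 minutes
}
```

The iPhone app then PUTs the file body straight to that URL, while every DynamoDB read or write still goes through the web service.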
I have a simple LAMP website on AWS. I want at least 2 instances that will connect to RDS, and I'll have a load balancer.
The simple question, not answered on the web, is: where do you put shared user-uploaded files like images?
NFS could be one answer: create another instance that shares a folder through NFS, and have the other instances mount it. But that is a single point of failure.
Rsync folders between instances?!
The best answer I found is to use s3fs and mount an S3 bucket on all my instances. But I don't like using things not supported by Amazon.
What is the right solution?
Thanks!
SOLUTION
I tried s3fs but ran into many issues (e.g. with eventual consistency).
So I finally implemented the AWS SDK and used its S3 library to upload directly to S3.
Then I pointed a domain at the S3 bucket directly.
Thanks to all
Why not put the files in S3? You don't need to use something like s3fs - just use the regular S3 APIs to upload the files to S3.
Users can also upload the files straight to S3 without going via your server if you use the POST API (see the sketch below).
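A rough sketch of that POST route, written here in TypeScript with the AWS SDK for JavaScript v3 (the same flow exists in the PHP SDK the asker ended up using); bucket, key, size limit, and expiry are placeholders: the server signs a POST policy and the browser submits the file straight to S3.

```typescript
// Server-side sketch: create a pre-signed POST so the browser can upload
// directly to S3 without the file passing through the web server.
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const s3 = new S3Client({ region: "us-east-1" });

export async function presignedPostForUpload(key: string) {
  const { url, fields } = await createPresignedPost(s3, {
    Bucket: "my-uploads-bucket",                                 // placeholder bucket
    Key: key,                                                    // e.g. "uploads/avatar-123.png"
    Conditions: [["content-length-range", 0, 5 * 1024 * 1024]],  // max 5 MB
    Expires: 300,                                                // seconds
  });
  // The browser builds a multipart/form-data POST to `url` containing
  // `fields` plus the file itself as the final form field named "file".
  return { url, fields };
}
```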
Like Frederick said, S3 is probably best, because the only other option is to store the files on the instances behind the load balancer and serve them from there. But then if you run multiple servers with some kind of round-robin DNS or something, you are up the creek and have to copy them everywhere, and it just won't work. And if you are using the ELB, then you can't do that anyway.
As you say, you are already looking at s3fs. You are right - why add an extra layer of software?
Likely better is to generate time-limited signed URLs, or, if the files are truly public, just make that part of the bucket public and use an S3 URL directly to serve the files. You can also point DNS at S3, so http://my-bucket.s3.amazonaws.com/path/to/file.png becomes files.mycompany.com/path/to/file.png.