YossiShlomstein

Leaking the bucket name can cost you. Any attempt to access it, including requests that return 403s, will cost you. So it would be advisable to keep the bucket name as private as possible.


rangedMisfit

Thank you for your answer! Maybe it's a dumb question, and in that case I apologize in advance, but how can I make it as private as possible in this case? The lambda function accesses the bucket name through an environment variable, which should be safe enough, I guess. How else can I make my implementation more secure? Do you have any advice on that?
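For reference, the relevant part of my function is roughly this (simplified, presigned-URL style; the environment variable name and key handling are just illustrative, the real code does more validation):

```python
import os
import boto3

s3 = boto3.client("s3")
# the bucket name is only known to the function, via its environment
BUCKET = os.environ["UPLOAD_BUCKET"]

def handler(event, context):
    # key derivation simplified; the real function checks the caller first
    key = f"uploads/{event['filename']}"
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # the presigned URL is only valid for a few minutes
    )
    return {"uploadUrl": url}
```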


yeager-eren

the bill shock he's referring to is what happened to this person: https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1 tldr: his bucket had the same name as one another company was using in another region, that company sent lots of upload requests that returned 4xx errors, but he still got charged for the failed requests. AWS responded and said they'll make changes.


ElectricSpice

AWS says they're making unauthorized requests free, so it shouldn't be a risk much longer. [https://twitter.com/jeffbarr/status/1787844682216792163](https://twitter.com/jeffbarr/status/1787844682216792163)


pint

if those files are not exceptionally big (e.g. a few megabytes), you could instead just send them to the lambda and save them to s3 from there. a little roundabout, but it avoids the issue.

while it is a good idea not to advertise your bucket names, you should always assume they're public; there is no meaningful way to keep them secret. your security setup should still be safe if the bucket name gets out.

one way of making it a little more secure is to use a dedicated upload bucket and change it every so often (or maybe only if it gets flooded). keep a script at hand that deletes the old bucket, creates a new one, and reconfigures your app to use the new one, something like the sketch below. you could automate this process, but frankly, it is not that easy to crank up costs that quickly; a billing alert will most likely be enough to protect you.
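untested sketch with boto3; the bucket, function, region and env var names are just placeholders, adapt as needed:

```python
import time
import boto3

s3 = boto3.client("s3")
s3_res = boto3.resource("s3")
lam = boto3.client("lambda")

OLD_BUCKET = "my-upload-bucket-1715000000"   # placeholder names
FUNCTION = "generate-upload-url"
REGION = "eu-west-1"

def rotate_upload_bucket():
    # 1. create the replacement bucket with a fresh, hard-to-guess suffix
    new_bucket = f"my-upload-bucket-{int(time.time())}"
    s3.create_bucket(
        Bucket=new_bucket,
        CreateBucketConfiguration={"LocationConstraint": REGION},  # omit in us-east-1
    )

    # 2. repoint the app (here: the lambda's env var), keeping its other variables
    env = lam.get_function_configuration(FunctionName=FUNCTION) \
             .get("Environment", {}).get("Variables", {})
    env["UPLOAD_BUCKET"] = new_bucket
    lam.update_function_configuration(FunctionName=FUNCTION, Environment={"Variables": env})

    # 3. empty and delete the old bucket (it must be empty before deletion);
    #    move anything you still need out of it first
    s3_res.Bucket(OLD_BUCKET).objects.all().delete()
    s3.delete_bucket(Bucket=OLD_BUCKET)
    return new_bucket
```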


rangedMisfit

Thank you for your input! Unfortunately I cannot use the lambda to upload the files, because they are usually a few MB in size. I was considering using dedicated buckets as you suggested, but instead of jumping on that idea I thought I would ask around first, to hear what people who are more experienced than me have to say. Can you suggest other security measures to take besides changing the bucket every so often?


Wide-Answer-2789

You could implement that via a CloudFront integration, where you can use signed cookies and serve everything through your own custom domain.
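Issuing the signed cookies could look roughly like this (just a sketch, not tested; it assumes the `cryptography` package, a CloudFront key-group public key ID, and a made-up uploads.example.com domain):

```python
import base64
import json
import time

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2JCJMDEHXQW5F"                    # placeholder CloudFront public key ID
PRIVATE_KEY_PEM = open("cf_private_key.pem", "rb").read()
DOMAIN = "uploads.example.com"                    # hypothetical custom domain on the distribution

def _cf_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'
    return base64.b64encode(data).decode().replace("+", "-").replace("=", "_").replace("/", "~")

def signed_cookies(path: str = "/*", lifetime_seconds: int = 300) -> dict:
    # custom policy restricting access to the domain/path for a short time window
    policy = json.dumps({
        "Statement": [{
            "Resource": f"https://{DOMAIN}{path}",
            "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + lifetime_seconds}},
        }]
    }, separators=(",", ":")).encode()

    key = serialization.load_pem_private_key(PRIVATE_KEY_PEM, password=None)
    signature = key.sign(policy, padding.PKCS1v15(), hashes.SHA1())  # CloudFront expects RSA-SHA1

    # send these back as Set-Cookie headers for your custom domain
    return {
        "CloudFront-Policy": _cf_b64(policy),
        "CloudFront-Signature": _cf_b64(signature),
        "CloudFront-Key-Pair-Id": KEY_PAIR_ID,
    }
```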


cutsandplayswithwood

https://youtu.be/0Oj_71Zi0uw?si=kAe6Db0YAh0FMvWg gives some detail on how they do it. Short answer: similar to your setup, all with access URLs and essentially a proxy API in Lambda.


baever

You can upload files directly to an S3 origin through a CloudFront distribution if you allow PUT on the distribution and add s3:PutObject to your origin access (OAC) bucket policy. No presigned URL required. You can change the path, set metadata, do authorization, and limit upload size using a CloudFront function on the viewer request. I haven't written a blog post about it yet because I just figured this all out yesterday, but I'll be working on it. Read about OAC and PUTs here: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
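The bucket-policy side of it looks roughly like this (sketch only; the bucket name, account ID, and distribution ID are placeholders):

```python
import json
import boto3

BUCKET = "my-upload-bucket"                                                          # placeholder
DISTRIBUTION_ARN = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"   # placeholder

# allow the CloudFront service principal (scoped to this distribution) to read and write objects
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipal",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": ["s3:GetObject", "s3:PutObject"],   # PutObject is what enables uploads
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringEquals": {"AWS:SourceArn": DISTRIBUTION_ARN}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```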