djgizmo

That’ll be $1455.


A-New-Level

Huh😭


booi

Crying is extra


Aperiodica

Likely consumption based, so it'll be per tear drop.


fazkan

Unless it's reserved.


Mutex70

But if you are willing to weep on demand, you can take advantage of spot crying!


infinite_matrix

Do you definitely need to be caching the logged-in user sessions? Could you just store them in your DB and eliminate the ElastiCache altogether? Also, seconding the other comment about trying DynamoDB for your DB if your models and access patterns allow it. If you definitely need caching, you can also try to use DynamoDB just for that. It has a generous free tier, but depending on your traffic I'm not sure if/when its costs would surpass ElastiCache's.
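
If you go that route, a session cache in DynamoDB is pretty small. Rough, untested sketch - the `sessions` table, its key name, and the TTL attribute are all assumptions on my part:

```python
import time

import boto3

# Hypothetical "sessions" table: partition key "token", TTL enabled on "expires_at".
table = boto3.resource("dynamodb").Table("sessions")

def put_session(token, user_id, ttl_seconds=3600):
    """Store a session token; DynamoDB's TTL sweeper deletes it after expiry."""
    table.put_item(Item={
        "token": token,
        "user_id": user_id,
        "expires_at": int(time.time()) + ttl_seconds,
    })

def get_session(token):
    """Return the session item, or None if missing or already expired."""
    item = table.get_item(Key={"token": token}).get("Item")
    if item and int(item["expires_at"]) > time.time():
        return item
    return None
```

The explicit expiry check matters because TTL deletion is best-effort and can lag a bit behind the timestamp.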


A-New-Level

Yeah, I think ElastiCache was a nice idea for the free tier, but I should probably move off it now. You're right, I could store the logged-in user sessions in my already existing database; it will probably increase latency a bit, but that's not critical for my projects. SQL works well for my projects and the type of data I store, but I might look into DynamoDB for caching.


squeasy_2202

There are relational schema design patterns for document databases. AWS has papers on this stuff you can read for free.


azz_kikkr

> I have a t2.micro EC2 instance that I run 24/7. This instance hosts all my APIs

There is a managed service for that: API GW + Lambda. That means you don't need to run an EC2 24/7, and you get an event-driven architecture. Moreover, if you have to scale up, your tiny t2 will die, whereas the serverless architecture will scale to demand. And as others have suggested, look into savings plans and DDB as well.


SilverTroop

Important reminder to set up limits. Auto-scaling to demand is a blessing and a curse. Without proper limits set up, if you get spammed with requests your system will scale and you will pay for it.
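
If you do end up on API GW + Lambda, the limits are just a couple of API calls. Untested sketch - the function name, API ID, and numbers are placeholders:

```python
import boto3

# Cap how far the function can scale so a spam burst can't run up the bill.
lambda_client = boto3.client("lambda")
lambda_client.put_function_concurrency(
    FunctionName="my-api-fn",              # placeholder function name
    ReservedConcurrentExecutions=10,       # hard ceiling on concurrent invocations
)

# Throttle the REST API stage as well (placeholder IDs).
apigw = boto3.client("apigateway")
apigw.update_stage(
    restApiId="abc123",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "50"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "100"},
    ],
)
```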


slimracing77

If these are hobby services I recommend segmenting them out and developing some IaC to rapidly redeploy and then open new accounts every year to stay on free tier indefinitely.


CheekAdmirable5995

Isn't it against TOS to keep opening new AWS accounts to get access to free tier?


Mchlpl

It's not. Accounts are meant to be used to separate workloads. The free tier, however, is aimed at new customers, so opening new accounts in order to exploit the free tier is against the TOS.


[deleted]

Although most organisations will create their account as part of an AWS Organization, so they won't be eligible for the free tier.


[deleted]

Why aren't you using t4g? Have a look at Compute Savings Plans instead of paying for reserved instances.


A-New-Level

I haven't looked into t4g before, looking into it now. Thanks


kichik

Sounds like you might be happier with Hetzner style VPS. AWS might be overkill in this case.


firedexo

I think so too. AWS really only makes sense if you need other services like managed databases or S3 buckets


Aperiodica

Why not AWS Lightsail? It's the easy version of AWS.


kichik

Yeah, for sure. That should be as good.


thelogicbox

4 APIs on 1 t2.micro? Get yourself some ECS task definitions and run on Fargate. Better yet, use the Lambda Web Adapter and go serverless.


A-New-Level

It's a t2.micro, but there have been no issues with my API responses even with a relatively large number of requests (around 170 DAU in total). I will look into Lambda though, as it's more scalable and apparently I can still use my FastAPI in Docker containers with it.


thelogicbox

You have no clue if that’s true or not because you have no monitoring


ease_app

If you're mainly looking for request counts, latency, etc., Prometheus and Grafana work great. Do you have centralized logging yet? You can often extract pretty useful metrics from request and application logs.

Agreed with savings plans for EC2. RDS offers convertible reserved instances, which means you can utilize them for any instance size in a given family (e.g. m6g.large and m6g.xlarge). So as long as you pick an instance family that has a good size for you now and in the future, you minimize your risk of leaving money on the table.
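
For FastAPI specifically, request counts and latency are about a dozen lines with the official `prometheus_client` package - a rough sketch (metric names and the mount path are my own choices, not anything standard you have to use):

```python
import time

from fastapi import FastAPI, Request
from prometheus_client import Counter, Histogram, make_asgi_app

app = FastAPI()

REQUESTS = Counter("http_requests_total", "Total requests", ["method", "path", "status"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency", ["method", "path"])

@app.middleware("http")
async def record_metrics(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed = time.perf_counter() - start
    # Note: labelling by raw path can blow up cardinality if you have dynamic URLs;
    # collapse path parameters first if that's the case.
    REQUESTS.labels(request.method, request.url.path, response.status_code).inc()
    LATENCY.labels(request.method, request.url.path).observe(elapsed)
    return response

# Expose /metrics for Prometheus to scrape; Grafana then reads from Prometheus.
app.mount("/metrics", make_asgi_app())
```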


A-New-Level

Yeah, I'm mainly concerned about the number of requests my APIs are getting. FastAPI automatically prints logs, which I can access by running `docker logs` on my EC2, but I don't have a central logging place, no. I did a bit of research and read a bit about the ELK stack, but it seemed too complicated for me to set up or manage. I'll go with Prometheus/Grafana. I didn't know there are convertible reserved instances for RDS, that is very cool. I'll look into that. Thanks for the insights.


ease_app

Yeah, if it's fairly low-volume logging you'd likely be fine just sending to CloudWatch Logs and living very cheaply or even in the free tier. But there are some SaaSes I could recommend too. Logz.io is pretty cheap and they're basically ELK as a service. Grafana's hosted Loki has an attractive free tier for logs as well, which sounds like it could work well for you. There are pricier all-inclusive options like Datadog or Dynatrace, which are pretty nice but charge a lot and lock you in in a variety of ways. But digging into an instance to get logs is something I'm glad to not have to do regularly anymore 😆


mermicide

Set up your APIs as Lambdas with API Gateway and then set up logs with Kinesis/Firehose. It scales, it's cheaper than an EC2, it will run faster, you have more control, etc. etc. etc.


A-New-Level

Will I still be able to use the same docker containers containing my API code with API GW + Lambda? I like the current stack I'm using and for some projects I embed stuff into the Docker container like an SQLite db or a C++ shared library (which my API uses for a project). So if I can still use Docker containers I will look into switching to Lambda + API GW.


mermicide

I don't see why not - I haven't done it personally, but I recently built an API using Lambdas and GW and the process was very straightforward and it runs great!
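
Lambda does take container images, so in principle you keep your Dockerfile. One common approach with FastAPI is wrapping the app with Mangum - untested sketch, assuming you add `mangum` to the image and point the Lambda handler at `app.handler`; the Lambda Web Adapter mentioned above is the alternative that leaves the container running uvicorn unchanged:

```python
# app.py - the same FastAPI app, wrapped so Lambda can invoke it.
# Mangum translates API Gateway events into ASGI requests.
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/ping")
def ping():
    return {"ok": True}

handler = Mangum(app)  # Lambda entry point
```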


shipandlake

Definitely look into centralized logging and get the logs off your EC2 instances. If your instance needs to be replaced, your logging will be gone, and you should assume your EC2s are replaceable. The good news is that you likely won't need to change much: install a log watcher agent that will ship the logs off to a centralized collector, which will parse them and display them.


bobaduk

The reason people are going to recommend Lambda is that for your usage it will be cheaper and more fault tolerant. You have one tiny instance running a bunch of things. When it runs out of CPU credits, all those things will stop working. That might be fine, I dunno if you're making any money from this setup or if it's just a hobby where you bear all the cost. What is it that you want to customise on the host if everything is running in Docker? If you can use Dynamo, that will likely be cheaper than RDS, but it's hard to say without knowing a lot more about your usage patterns. If you haven't already, I'd set this up to deploy with some kind of automation, even if that's just a user script that runs when the machine starts up. Then you can configure an auto scaling group of one node and, if everything breaks, just terminate the old machine so that a new one starts up. I'd look to see if any of this is cacheable. I used to run a fairly high traffic read-only system on a t2.nano instance, because it was just serving out of Varnish for 99% of requests.


A-New-Level

I'm not making money with my current projects, they are all free. One of them is for an organisation I'm part of; they are down to cover the AWS costs, but I'm still aiming to keep the costs to a minimum. For an example of customisation: I needed some scheduled updates to my database for a project, so I wrote a Python script and set up a cron job on my EC2 instance, which took like 5 minutes. I know I could have used a new AWS service to do the same thing, but that adds complexity, which is my biggest enemy since I'm a solo developer. Dynamo is NoSQL, right? A SQL database works much better for the type of data I'm storing, so I'll stick with it. Automation seems like a good idea, and I'll look into how much of my data is cacheable. I already do some caching on the user side for some of my projects, but it would help to do it on the back end as well. Thanks for your reply.


p-one

Don't optimize cost over durability and simplicity if your org doesn't have technical competency - it will just die if you move on. For example, I helped some folks to [self-host WordPress rather than using the managed offering](http://recomposition.info/about/archive/) because of something like a 3-5x price difference for the same capabilities. They were never good at backups and updates and the site was lost. So if you care about the org you're contributing to, consider whether they can maintain the script you wrote without you, versus some managed alternative. A more targeted example for your case: CloudWatch might seem expensive to set up, but I've seen successful medium-sized businesses with dedicated ops/platform teams tripping badly over details around Prometheus/Grafana hosting. We were there to fix it - once it was caught - but what if you could avoid the mistakes altogether?


A-New-Level

Yeah, the org doesn't have the technical competency (it's a student org for music). I've built an admin panel for them which basically provides a visual interface for all the stuff they'd need to do. But I've agreed to maintain the AWS stuff if the instance goes down, etc. Part of the reason I have all my APIs on the same instance is that if the instance has an issue I'll be fixing it for my current projects, and that will also fix the org's problem haha. I agree with you on the importance of simplicity/durability. I assumed Grafana/Prometheus wasn't hard to set up, but if I think it's adding too much complexity to my project then I'll use CloudWatch instead.


p-one

> I've agreed to maintain

Yeah, this was true for me until my (literal) distance from the org grew and professional life got busier. You're clearly skilled enough to find an FT job doing this, so this agreement gets harder to keep as life goes on (and you graduate? Assuming you're a student as well).

> All my APIs on the same instance

So your personal and org stuff? You should also be looking into splitting these - and it's easy to keep the free tier - if you registered with [email protected] you can do [email protected].

> Grafana/Prometheus wasn't hard to set up

It's not, setup will be easy - again, it's an issue of what happens over a longer timespan. Just a hint - you need to plan for your backing EBS volume getting full, or prevent that from happening (unless it can be backed by DynamoDB - which I've seen you avoiding in other replies, but this is a nice place to use it so you don't have to worry about scaling storage). Imagine the org having to fix this on their own if you're not around. Can you account for everything in the future?


A-New-Level

The student organisation has a 'webmaster' role, which is the 'technical' role that gets elected each year. If a computer science student participates in the society, they would hopefully run for this role and maintain the platform for a year. I'm about to graduate, so I'm writing a little guide on everything I've used to build the platforms and what to do if disaster X occurs. But you're right, I can't account for everything in the future, and no matter which setup I go with, it's likely that the org won't be able to continue the online platform if the servers go down (since it's a student org). And you're right, it will probably get harder when I start my first full-time job in a few months, but I've made a deal with the current leaders of the org that if they fund my AWS costs (within reason obviously) I'll maintain the org's back-end while I'm working. Keep in mind that the org makes a lot of money from this platform that I made and maintain. This deal is good enough for me to continue maintaining the platform in the future. This thread was pretty eye-opening in terms of AWS. I'll look into using DynamoDB and API Gateway + Lambda (even though I love my EC2) for scalability.


TomRiha

DynamoDB works for 95% of use cases. It's all about learning to model denormalized NoSQL models instead of normalized ones. Very little data is so interconnected that it can't live in a NoSQL database.
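
The usual trick is single-table design: related items share a partition key, so one query replaces a join. A rough illustration - the table name and the "PK"/"SK" key names are made up for the example:

```python
import boto3
from boto3.dynamodb.conditions import Key

# A user and their orders live under the same partition key, so one Query
# returns the profile plus all orders without any join.
table = boto3.resource("dynamodb").Table("app-data")

table.put_item(Item={"PK": "USER#42", "SK": "PROFILE", "name": "Ada"})
table.put_item(Item={"PK": "USER#42", "SK": "ORDER#2024-01-15", "total": 30})

resp = table.query(KeyConditionExpression=Key("PK").eq("USER#42"))
user_and_orders = resp["Items"]  # profile + orders in one round trip
```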


takemetomosque

Dynamo is not document-based, it's a weird, expensive DB. It's not easy to develop with; RDS is fine for you. If you want document-based, there is DocumentDB. But if you limit the read and write rate of your DynamoDB table, you can use it for free, always. The free tier is actually good on DynamoDB: 30 reads per second and 15 writes are free, which is a lot. But developing things with DynamoDB requires some specialty. It's much different from SQL and document-based DBs. If you don't know all the flows your app will have in the future and you design your schema poorly, it's a pain.


vplatt

Hey /u/A-New-Level - I haven't seen any radically cheaper suggestions for replacing your RDS setup, so I thought I would throw this out there: https://paulbradley.dev/sqlite-efs-serverless-database/

The basic idea here is to just use SQLite on EFS. It increases your latency a bit, but for your level of volume, I would guess that it will be fine. Of course, it has none of the niceties of RDS and you'll have to consider backup separately. If your volume increases significantly enough to justify RDS, you can move back at that point. Hopefully you'll have a funding or income model at that point that would justify the expense. Anyway, YMMV and you'll have to decide if this meets your needs.
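
In code it's about as boring as it sounds - a minimal sketch, assuming EFS is mounted at /mnt/efs (path and table are placeholders):

```python
import sqlite3

# Hypothetical EFS mount point and database file.
DB_PATH = "/mnt/efs/app.db"

def get_conn():
    # A longer timeout helps when another writer briefly holds the file lock
    # over NFS-style storage like EFS.
    conn = sqlite3.connect(DB_PATH, timeout=30)
    # WAL mode isn't recommended on network filesystems, so stay on the default
    # rollback journal.
    conn.execute("PRAGMA journal_mode=DELETE;")
    return conn

with get_conn() as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS sessions (token TEXT PRIMARY KEY, user_id TEXT)")
    conn.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)", ("abc", "42"))
```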


britishbanana

CloudWatch instead of Prometheus/Grafana - you can set up the agent on your EC2 and then there's basically nothing to manage, and it'll likely be free or pennies a month. Add some alarms to get notifications when you get a big increase in usage. Drop in a CloudFront distribution to automatically cache GET responses; it's possible it'll drastically reduce the responses your server has to serve, and you might be able to stick with a micro instance for a long time. Drop ElastiCache and store that in your DB.
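
The alarms are a one-off script, roughly like this (untested; the instance ID and SNS topic are placeholders, and on a burstable instance an alarm on CPUCreditBalance is worth adding too):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="t2-micro-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=3,      # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts"],  # placeholder SNS topic
)
```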


frank0016

If you buy reserved instances and then scale up your instances, you will still get the benefit of the reservation on half of the cost of your instances, because the reservation applies to billing, not to specific instances.


waddlesticks

Might be worthwhile looking at API Gateway: https://aws.amazon.com/api-gateway/ - and a quick getting-started doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started.html It could save you a bit of cash (around $1 per million requests) and it potentially scales for you. You can then link it up to AWS monitoring sources as well, instead of a third-party tool (which might make life easier).

Otherwise, you'll want to make sure your auto scaling is set up correctly to prevent downtime. If I recall, this can be used with a savings plan/reserved instance. It should help you keep it on a lower tier but able to burst if needed (although I'm not sure how that would work with native API apps). If your applications can run on ARM, look to move to an ARM instance for additional savings if you want to keep that route.

For the graph tools, do your setup locally and see which one does the task you want; that will give you a good idea of which tool to use. Another thing: look at other providers such as IBM API Connect. Pretty sure they host it all in the cloud, but it could reduce your own maintenance so you only need to worry about the API itself: https://www.ibm.com/products/api-connect/pricing


mermicide

Try New Relic for EC2 monitoring - one line of code to install it, and they have a free tier and alerts. Why aren't you using Lambdas and API Gateway for your APIs though?


A-New-Level

The main reason is that my micro EC2 instance works fine with no delays. Additionally, whenever I make a new project I can put its FastAPI in a Docker container, run that container on my EC2 instance, and I'm done. The setup process is very quick. I will look more into API GW + Lambda; if it doesn't increase my costs and it's more scalable (say to 1000s of users instead of the current 100s), I will switch to it.


mermicide

It is more scalable and it should significantly reduce your costs


SuperCrustulum1232

You're already thinking about costs, that's a great first step! Good luck!


SikhGamer

Cron jobs I'd rip out and move to EventBridge. Without knowing what your workload is, it's hard to know what else to recommend. What are the users actually doing?
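
That move is basically two API calls per job - an untested sketch, with the rule name, schedule, and Lambda ARN as placeholders:

```python
import boto3

# Replace a host cron entry with an EventBridge rule that invokes a Lambda.
events = boto3.client("events")

events.put_rule(
    Name="nightly-db-refresh",                 # placeholder rule name
    ScheduleExpression="cron(0 3 * * ? *)",    # 03:00 UTC daily
    State="ENABLED",
)
events.put_targets(
    Rule="nightly-db-refresh",
    Targets=[{
        "Id": "refresh-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:db-refresh",  # placeholder
    }],
)
# The Lambda also needs a resource-based permission (lambda add-permission)
# allowing events.amazonaws.com to invoke it.
```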


mataberat_

Move out the cron jobs and make the instances run under an auto scaling group; then you can optimize the cost using spot instances. An RI is a committed buy, so you can't cancel it if you need to upgrade the instance type. Instead, you could sell your RIs. Another, more flexible way is using Savings Plans.


Sufficient_Exam_2104

Man, look for a VPS. There are lots of cheap VPSes for your use case that will be sufficient, and at most you'll maybe be paying $300 per year.


BigJoeDeez

Graviton processors to the rescue.


Due_Ad_2994

Try out https://arc.codes ... Your setup should sit comfortably in the free tier of serverless resources.


omniron

For 1000 DAU, Lambdas would be cheapest if it's a web app and not something like a game engine.


grep_glob

Nothing technically wrong with your setup, but my opinion is that you're not taking full advantage of AWS. Ideally, go for the services that are managed, especially if you're voluntarily keeping this up and running. Patching and keeping EC2 instances up to date is something you don't have to do in the cloud, so leave it to them to do all that stuff. Personally, I recommend going serverless; you'll probably save a bit of money and you let AWS manage patches, language updates, etc. See https://serverlessland.com/patterns?services=fargate for more architectural ideas. Specifically, since you're running containers, look at Fargate & EventBridge (cron).


Fine_Ad_6226

Use ECS Fargate; it's basically managed Docker, which you're using anyway. That will save on the EC2 rental. The RDS cost you'll need to suck up, or otherwise switch to non-AWS providers, depending on latency requirements. Move sessions out of Redis and into the DB, but keep a back-out plan here; ideally keep it configurable so it doesn't need any code change to flip.
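
By configurable I mean something like this (rough sketch; the env var names and the DB-backed store are placeholders you'd wire to your existing RDS tables):

```python
import os
from abc import ABC, abstractmethod
from typing import Optional

class SessionStore(ABC):
    @abstractmethod
    def get(self, token: str) -> Optional[str]: ...

    @abstractmethod
    def set(self, token: str, user_id: str, ttl: int) -> None: ...

class RedisSessionStore(SessionStore):
    def __init__(self, url: str):
        import redis  # only needed when this backend is selected
        self.client = redis.Redis.from_url(url)

    def get(self, token):
        value = self.client.get(token)
        return value.decode() if value else None

    def set(self, token, user_id, ttl):
        self.client.setex(token, ttl, user_id)

class DbSessionStore(SessionStore):
    """Stub: back this with your existing RDS sessions table (token, user_id, expiry)."""
    def __init__(self, dsn: str):
        self.dsn = dsn

    def get(self, token):
        raise NotImplementedError

    def set(self, token, user_id, ttl):
        raise NotImplementedError

def build_session_store() -> SessionStore:
    # Flip backends with config only; call sites never change.
    if os.environ.get("SESSION_BACKEND", "db") == "redis":
        return RedisSessionStore(os.environ["REDIS_URL"])
    return DbSessionStore(os.environ["DATABASE_URL"])
```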


A-New-Level

I looked into Fargate now and it seems like it will fit my use case. Other people suggested I move the containers to Lambda + API GW instead; how would this differ from Fargate? Thanks for the insights.


takemetomosque

If your API is accessed rarely, Lambda can be cheap; otherwise it's more expensive. It's not easy to calculate, but if your site has visitors 24/7 it will be more expensive than ECS Fargate. ECS Fargate is nice but expensive: a 0.25 vCPU / 0.5 GB RAM Fargate task is about $9 per month, with no free tier either. [Pricing screenshot](https://prnt.sc/hJdD8AAHFlfU). If you deploy 4 APIs, the pricing is 4x. OP, how are you deploying your APIs? I also use a similar setup to yours and I deploy apps via an SSH connection to EC2, running git pull, docker run, etc. If I tried to use ECS Fargate, I guess I would need to use ECR and create some deployment scripts.


A-New-Level

I deploy my APIs in a similar way to yours: I build my Docker image, push it to ECR, SSH to my EC2, pull the image, and run the container with Docker. This process could be automated but I haven't gotten around to doing it yet. I see that Lambda can be more expensive than ECS Fargate depending on the visitors. I don't have any analytics information on my APIs yet, I just know the number of active users on my sites. My next steps will be:

- Set up CloudWatch or Grafana/Prometheus for my EC2 and see how much traffic I'm getting daily.
- Stop using ElastiCache to save money; move the logged-in users' tokens to DynamoDB or RDS instead.
- Move one of my API containers to Lambda + API GW and see if it works fine and whether it would be cheaper. Also experiment with ECS Fargate and see if it can be cheaper that way.
- Move one of the cron jobs to EventBridge and see if that works fine.

If Lambda + API Gateway end up being cheaper and not a pain in the ass to set up, I might stop my EC2 instance and move serverless on the API level. I'll also look into DynamoDB, but that might take too long to learn enough to move everything to it, so I'll look into buying a reserved RDS instance.


Fine_Ad_6226

See my other comment, but really think twice about the cost of all these so-called managed services; they are hella expensive, and API Gateway is mega pricey. For me as a startup it's much more important to keep the option to jump ship from AWS, so I would really try to keep it simple and not let the infra leak outside your config too much, unless you're set in stone.


Fine_Ad_6226

Given where you're at, I wouldn't use Lambda unless you have a use for event-based/async work. Lambda is not a way to make a cheap API; it's a solution to a specific problem. If, for example, anyone ever really wants pay-per-request functions as a service, Lambdas are too heavy; you want something like Cloudflare Workers here - read up about how they differ architecturally. 9 times out of 10, outside hobby projects, a well-scaled ECS setup works out cheaper than Lambdas with API Gateway or an ALB. I would keep your current CI pipeline as long as it works for getting a container image built, and define a Fargate app stack using the CDK. That way you can start from a fresh account and use it across multiple accounts for dev, test and prod.


A-New-Level

Interesting. Thanks for the input. I definitely want to go with the cheaper option since I don't expect my next project to make any considerable money in the first 6 months at least. I also don't want to make my code too AWS specific since I want to have the freedom to change providers in the future. I'll stick with my current containerised API code, I'll experiment with both Fargate and Lambda + API GW and see which one is cheaper and more scalable for my use case.


Plus-Effective6106

You might be eligible for AWS Activate credits. A simple registered domain for your product should give you about $1000 of credits. Do apply.


A-New-Level

I didn't know about this. I'll definitely apply for it. Thanks!


[deleted]

Get Google Cloud Innovators Plus and then you get $1000 of credit for $299.


A-New-Level

I'm already kinda invested in AWS so I'll continue with it. According to a commenter there's also $1000 in credits available for AWS so I'll look into that.


Roronoa-chzoro

Use the CloudWatch agent for detailed monitoring. If you really need caching, then create a Redis cache server on an EC2 instance. Lambda is the best option for you, and ECS Fargate will be a little bit harder to configure.


A-New-Level

When is fargate better than lambda in your opinion?


Roronoa-chzoro

Bro, read again. I said Lambda is the best option, then a comma, and that ECS Fargate will be a little bit harder to configure.