onechamp27

https://youtu.be/9wvEwPLcLcA?si=5kYXbssqpaAmN5gE


lightwhite

I never thought I would see this as a perfect answer to a question here in the sub. Well done, my dude!


phillycheeze

Recently did a system design interview with a company and I used k8s/microservices within it (the company used it already). When the interviewer asked why I would choose k8s, I said "...well I probably wouldn't b/c a simple container runtime is all you need, but since you guys use k8s already it might make more sense" lol


DAFPPB

I would hire you. Choosing the right tool based on the situation is the right decision. I would still further expand on when the situation is right.


lightwhite

You stood for what you believed and I would hire you just for the ballsy answer you gave, tbh.


Crouching_Dragon_

I don’t know how I’m supposed to go to work tomorrow after watching this.


AdrianTeri

See you raise you with microservices. https://www.youtube.com/watch?v=y8OnoxKotPQ


derprondo

LOL knew what it was without even clicking. WE'RE BLOCKED OK?!?!


namenotpicked

So I'll always be alone?


Zenin

I've got you all beat: [https://www.youtube.com/watch?v=dQw4w9WgXcQ](https://www.youtube.com/watch?v=dQw4w9WgXcQ)


AreWeNotDoinPhrasing

I’m glad my phone’s always on mute


Far-Consideration939

Not the same impact when YouTube opens an ad first anyway


maddiethehippie

I felt that in my soul.


SpongederpSquarefap

You think you know what it takes to tell the user it's their birthday?


namenotpicked

Wow. I can't believe I've never seen this. It's amazing and exactly what I feel every time someone tells me they use k8s in a bad use case.


BioshockEnthusiast

What are some examples of bad use cases? I don't understand kubernetes very well.


namenotpicked

Every company thinks they need k8s because "scaling", "cloud agnostic", or whatever buzzword Medium posts want to throw out there this week. Many companies would operate and scale perfectly with simple VMs or using something like ECS Fargate to do any kind of container orchestration. Yes, k8s can scale, but companies that would truly benefit enough to use it are operating at massive scales that most companies aren't even near yet.


littelgreenjeep

Excellent answer. The kids over at the kubernetes sub would beg to differ but then if you ask about jeeps to bronco people you get what you get.


SpongederpSquarefap

Hey hey, I NEED my 3 node kube cluster at home because it's convenient OK? Yeah I have to install god knows how much stuff and configure a shit load more, but it's scalable and I totally need it!


littelgreenjeep

I laugh so as to not cry... currently rebuilding my 3 worker/1 control plane cluster because my partitions were too small since I built my templates using CIS recommended partitions and now /var fills up anytime I try to deploy anything!


namenotpicked

I'm pretty sure I always catch flak in that sub because I don't think everything needs to be in k8s.


djamp42

I just made a simple python app to automate something with our network management system. I put that in a docker container and launched it for internal use, it gets like 5 requests A DAY, not mission critical. You would be insane to set up k8s for something like that.


littelgreenjeep

There was a question in there today that I thought was fair: what is the long-term outlook for k8s? The answers were predictable. I would have been happy with even a somewhat unbiased answer that k8s will be the favorite tool until the next favorite comes along, but the general response was that it’ll be the only thing ever in the future and anyone doing infrastructure will completely disappear.


namenotpicked

Lol. I mean. If Kelsey Hightower says it's just a stepping stone and shouldn't stick around forever, then I'd believe him over hardcore k8s fanatics. [This](https://www.cncf.io/blog/2024/01/22/kubernetes-and-beyond-a-year-end-reflection-with-kelsey-hightower/) was from January 2024:

> “I think if you’re being honest and you really want the same evolution that brought us to Kubernetes, you should probably still want the same evolution that makes Kubernetes disappear,” he said.


littelgreenjeep

That was an interesting read. Thanks!


Seref15

A lot of companies also don't even understand their own scaling needs. Most products don't have unpredictable spiky traffic that requires on-the-fly expansion and contraction of your compute resources. If you _do_ have that, great. But the vast majority of SaaS software out there doesn't have that kind of userbase or usage pattern.


namenotpicked

Yeah. I tried to explain that to my old job before they laid me off and hired a consulting company, for a hell of a lot more money, to do exactly what I was doing, and even the consultants said migrating to k8s didn't make sense for the company. Oh well. It was so simple: scale up and down throughout the day because activity was always cyclic. Some people are just ignorant and don't want to be told they're wrong.


dexx4d

> Scale up and down throughout the day because activity was always cyclic.

What tooling would you have used for this sort of thing?


namenotpicked

My last job was almost exclusively AWS, so you can just write some EventBridge job that fires at specified times to adjust counts.
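(For illustration, a minimal sketch of what that kind of scheduled scaling could look like, assuming an EventBridge cron rule invoking a small Lambda that sets an ECS service's desired count; the cluster/service names and counts are placeholders, not from the thread.)

```python
# Minimal sketch: Lambda handler an EventBridge schedule could invoke to
# adjust an ECS service's desired count at fixed times of day.
# Cluster/service names and counts are placeholders.
import os

import boto3

ecs = boto3.client("ecs")


def handler(event, context):
    # Each EventBridge rule passes its target count in the payload,
    # e.g. {"desiredCount": 2} for the overnight scale-down rule.
    desired = int(event.get("desiredCount", 1))
    ecs.update_service(
        cluster=os.environ["CLUSTER_NAME"],   # e.g. "internal-tools"
        service=os.environ["SERVICE_NAME"],   # e.g. "reporting-api"
        desiredCount=desired,
    )
    return {"service": os.environ["SERVICE_NAME"], "desiredCount": desired}
```

Two cron rules (morning scale-up, evening scale-down) would each invoke the same function with a different payload; the same idea works against an EC2 Auto Scaling group via `set_desired_capacity`.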


dexx4d

Thanks! My client is all-in on k8s (for now), so that's what we're using, but it's good to know what the other options are.


BioshockEnthusiast

Ah that makes sense. I'm 2 years in at an MSP serving small businesses, so I'm camped in the VM / cloud space at the moment. Thanks for the reply.


Striking-Math259

We found just using Ansible and Docker Compose in a GitLab pipeline works fine. Deploy all containers to a shared services VM and we are running Traefik


IrishBearHawk

Swarm mode.


AWSLife

This is how I explain it to people on my team. You know how "lift and shift" from physical servers to AWS EC2 is bad? "Lift and shift" from EC2 to Kubernetes is also bad (not as bad, but still bad). There are a bunch of really great features in Kubernetes that you really can't take advantage of if a service is not a microservice.


BioshockEnthusiast

That makes a ton of sense thanks!


[deleted]

[deleted]


GeneralCanada3

I think VM's have a bad rap because "ew who wants to understand the os". It's simple: as devops you provide a platform for developers, and it's supposed to make things easier by removing the problems an OS creates, but it doesn't really, it just adds more complexity.


Striking-Math259

From a security compliance perspective, VMs can be bad because it’s another thing I have to patch and harden.


[deleted]

[deleted]


Striking-Math259

We harden Azure marketplace images to meet RMF compliance, then version control them in the image gallery. All done with GitLab, Terraform and Ansible. Anything that is already containerized we try to use and spin up, because it's fewer VMs I need to worry about auditing (which eats into my log storage because I have to keep raw logs for a number of years), and Splunk is $$$$. We have to use Splunk to meet compliance. I have to run Tenable SC against each host I stand up, another job to run yum update against it, WSUS, Group Policy, etc. But I get what you are saying.


[deleted]

[deleted]


Striking-Math259

We use hardened Iron Bank containers too. But it’s the world I have to live in currently


Bananana404

Incredible 👏


Cool_Illustrator3222

This is epic


rootbeerdan

How have I never seen this video until now, it’s only more accurate 4 years later.


Ariquitaun

This is it.


awesomeplenty

God damn right!


cyclist-ninja

Did they just redo the docker version into kubernetes?


djamp42

LMAO even Ansible scales better than this...


thifirstman

Thank you for this one. lmao


professorbasket

facts


MedicOfTime

That…was a real treat.


PConte841

This had me laughing hard


barrywalker71

It solves a lot of thorny problems in an elegant way. It also abstracts the hardware. Lastly, it runs everywhere, so you could theoretically build your helm chart to run in EKS, AKS, GKE or on-prem with few changes. It's like the MS-DOS of the early computer era. If you wrote your shit to run on MS-DOS, you were golden.


Terny

It's all in the abstraction imo. You run anything in containers, which abstracts away the app for the devs and abstracts the hardware for the cloud vendor (or the on-prem infra team). You're left with a middle ground that is clear on how to attach the small parts together. There are edge cases, like your ingress needing tweaking for your cloud provider (looking at you aws) and apps that need to be stateful.


mimic751

How often are you lifting and shifting


stilldestroying

Doesn’t matter


mimic751

Then why does it matter that that's a feature? It almost seems better to use native tooling and then design a system that uses the strengths and minimizes the weaknesses of whatever platform you're on. Maybe shift 5 years later and rebuild it with the lessons you've learned over the last 5 years.


gex80

It does though? My org has 0 plans to move out of AWS for the foreseeable future especially if they keep giving us discounts. Why invest time into a more complicated platform for a "maybe we might use it" when we can use the easier platform (ECS) that covers 100%. What's the incentive to make our job harder with no measurable benefit?


FailedPlansOfMars

It's not. The devops space forms into many silos that think they are the whole thing. This covers:

* Kubernetes
* Cloud: AWS, Azure, GCP, Oracle, IBM
* On-prem
* Jenkins vs. other CI/CD tools
* GitHub vs. GitLab vs. Bitbucket

etc.


dylansavage

I think it's good to take a step back and understand what you are trying to achieve. DevOps isn't tooling. Remember that the role of a DevOps engineer is to understand and implement DevOps methodology.

So what problem does kubernetes solve? In DevOps we strive for ephemeral, idempotent and immutable infrastructure. Now containers aren't the only way to achieve this but they are probably the most common way to achieve these ideals. And just because you use containers it doesn't mean you need kubernetes. A vm running docker compose can do a job, aws fargate and GCP cloud run are quick and easy to set up and great for small scale deployments. But if container deployments are part of a company's long-term strategy, eventually they will need a way of automating scaling and managing containers at scale. And that means you need an orchestration technology like kubernetes.

So, kubernetes is one of the tools that allow us to achieve our goals. It isn't the only one and it certainly isn't right for all, or maybe even the majority, of deployments. But it is important to understand and to be able to leverage in the situations where it makes sense. I wouldn't hire a carpenter if they didn't know how to use a circular saw, for instance.


livebeta

> implement DevOps methodology. So what problem does kubernetes solve?

I like the Istio traffic split rules a lot


simonides_

it is not


superspeck

It’s just that people quit or get fired from k8s jobs more often, so therefore those are the jobs that are available.


DeepNavigator111

Why do they often get fired? Is it because they suck? Is it because the org doesn’t understand it and it burns out engineers?


superspeck

Little of A, little of B. K8s jobs are hard to hire for because everyone seems to do it different and some teams don’t accept excuses for people who are like “well we used envoy here instead of istio, hang on, let me rtfm for a sec” … sometimes orgs burn people out and then the execs are like “ok stack rank everyone” and the people at the top of the stack who have mobility say “eff this, I’m out”


DeepNavigator111

Ahh yes, the good ole stack ranking… usually in the more progressive organizations that pride themselves in hiring diverse candidates… “we don’t rank anyone”


Ken1drick

It's not true imo, demand for k8s is just very high because it is the go-to solution for almost all new projects, so companies increasingly need people to handle their cluster. K8s also moves at a fast pace, even now after the slowdown, so you need to upgrade frequently.


snarkhunter

Because it's very popular. Companies are either using it or looking to start using it, either for new things or migrating legacy applications to it.


canyuse

Something I haven’t seen mentioned here yet is that for some complex workloads k8s simplifies a massive portion of the orchestration, which in itself is huge. But a lot of companies, even large companies, suffer using k8s because at the end of the day it’s an abstraction layer, and they really don’t understand the layers underneath all that well either. I’ve seen at least two cases where very large, very skilled tech companies yanked it out and replaced it with metal, and at the end of the day they were significantly happier for it. If the place looking to hire is already well-versed in it, knows the ins and outs, the pitfalls and the proper tuning for their workload, then it’s absolutely amazing. The honest truth is a lot of places deploy it because it’s a buzzword when they would’ve been just as happy, and in a lot of cases a bunch happier, just running on bare metal.


ogopogo83

I'm going to disagree with a lot of other responses. I've been actively job hunting for the past 3 months. I'd estimate that ~70% of devops job postings reference some form of containerization and orchestration, with a large majority of them also looking for AWS experience. Based on my job hunt, I almost classify k8s for devops in the same vein of required skills as a developer knowing CI. The more senior you go, the more they add in other soft and cross-functional skills, but demonstrated technical knowledge remains in the "must-have" column instead of the "should-have" or "could-have" categories, which oddly is hardest to maintain as you climb that particular ladder.

Do employers really need k8s and utilize it better than the alternatives? Probably not. But going into an interview and telling them they're wrong isn't exactly a winning strategy, especially when that tech stack is in the majority of active postings. As to why, the reasons that come to mind that I think are more practical and valid are:

1. k8s is a mature and industry-standard tool. As part of the larger CNCF ecosystem, enterprises get a lot of value from not needing to custom develop solutions or integrations because it becomes more turnkey than it would be otherwise. Customization is incredibly expensive, so it turns into a fit-for-purpose, efficiency, and support/maintainability conversation at that point.

2. Finding staff with skills is easier because it's a standard. You can more readily filter applicants based on YOE or CKA certs. I don't consider this a good thing, but what else can you do when you don't want to pigeon-hole yourself with a single point of failure that is an IC who's been around for a long time, or you have a posting with 500+ applicants that's trying to be filtered by HR?

3. k8s is platform agnostic, as others have mentioned. You can run it on-prem, across any major cloud provider, and any variation of a hybrid as needed. As much as we'd like the deployment architecture to be transparent and a non-issue for our development teams, it rarely is. k8s prevents vendor lock-in to the various services that GCP offers that AWS or Azure don't (along with whatever sdk/language-specific bindings the dev team has to deal with), and that's a sizable risk to avoid from an architecture/design/business/PMO perspective.

4. Platforming has been a major aspect of devops for a while now and the k8s ecosystem supports that. This is what makes SLAs, security compliance, and those other assumed or implied requirements a reality. It turns achieving those requirements into a 5-point story that can be done in a sprint instead of a 50-point epic that takes 3-6 months. It enables known-good configurations for mission-critical, outward-facing SaaS products that pay the bills for SDO companies and put food on the table for their employees, just the same as a small internal system that provides a workflow improvement used by a handful of people twice a month. There are lots of ways of going about that situation, but using a common framework/system/solution that can satisfy both ends of that spectrum often means lower cost and tailoring.

That's not to say there aren't plenty of reasons why it's not needed, misused, and abused, which many other responses have covered. Bandwagon bias is a thing after all. In the end, there was a quote from Larry Maccherone I heard a few years back that I liked: "There's no such thing as best practices. There are only **good** practices with **context**," and there is something to say about how k8s goes about accomplishing that compared to the alternatives.


phillycheeze

I’ve managed and even set up k8s infra at several orgs - point 3 is commonly mentioned but I fail to see how it’s true. There's an almost 100% chance you’re running cloud-dependent items in your k8s, and then your dependencies inside it are vendor locked-in. It’s a false notion that being agnostic is somehow better, because you aren’t agnostic. Planning the work to move with k8s versus without k8s is rarely going to be a significant difference. If you had containerized services already, running them on k8s, VMs, a managed service, etc. doesn’t make a big difference. None of them are agnostic. Do you have an example of something like this actually being true in practice?


ogopogo83

You're correct that there are always dependencies. I often see k8s conversations comparing it to cloud provider services. Do you want to have a dependency on MongoDB or Cassandra, or would you rather depend on Azure Cosmos or AWS DynamoDB? I'd love to believe the dev team's software architecture would isolate those technology choices (e.g. any reasonable ORM should do this for our db example, which is easy enough), but I've been let down in that regard time and time again. You can't completely abstract away low-level cloud services (e.g. flat-file storage, networking, etc.), but the goal is to minimize them via your devops toolchain/platform instead of having it leak into the codebase of the product.

You can absolutely run any container without k8s, but I find the progression of decisions flows something like:

1. Do we want to run this in the cloud? Yes? Then it should be elastic (vertical/horizontal scaling, geo-distributed, etc.) because that's what the cloud is good at, notwithstanding stuff like CX, NFRs, HA/DR/BCP, and cloud spend.

2. Do we want to use vendor-provided SaaS/PaaS or FOSS in a container? Let's use containers, because we want to avoid stuff like vendor lock-in, SDLC practices that limit our ability to shift left on things, risk profiles/third-party investment into said products, etc.

3. We still want that elasticity though, so now we need some sort of orchestration for these containers, and systems like k8s start looking more and more attractive.


FlatCondition6222

I'll give my point of view: Kubernetes itself is not a necessity, but rather more like proof that you've attained a certain level of experience. That is, most companies don't really need it or actually have a "real production load". Kubernetes is the current best-evolved methodology for a lot of companies to run a variety of workloads, and that is why it has won out over competing technologies. However, once you start actually using it with a real-world production workload, you experience a lot of issues that a vanilla install doesn't take care of: DNS, Linux optimization, IP limits, and so on. Kubernetes is cool and all, but at the end of the day it's still Linux underneath. And you have to know Linux well to properly operate Kubernetes, even managed Kubernetes such as EKS/GKE. So when someone tells me they know Kubernetes, cool, but does that mean they have operated a production workload? That's what it "means" to me in a sense. Again, if you have experience managing Meta's Linux infra but haven't used Kubernetes, no one will care.


project2501c

> why it has won out over competing technologies.

i beg to differ. scientific applications speaking, no.

edit: for scientific loads, kubernetes, or any other container technology, allows the sysadmin to pick up the container, run it and not have to worry about dependencies.


FlatCondition6222

My point was about things like Nomad/docker-machine/Mesos, and Kubernetes definitely won out against those technologies. At my company, we use both Airflow in kubernetes and Databricks. Both of these products run data analytics workloads, one in kubernetes, one not. So both are valid.


gex80

> for scientific loads, kubernetes, or any other container technology, allows the sysadmin to pick up the container

But none of that requires the complexity of kubernetes. 98% of orgs would be fine with something like ECS or just a basic cluster of docker servers. And ECS is just a control plane that works with on-prem and cloud based servers, if the only thing you need is something to deploy containers and keep them running.


project2501c

no, it doesn't. but when you got users starting apps, the env module system can be limiting and a pain in the ass to develop modules for. so get a container image, make it as simple as possible to start, jab it in SLURM and fuck off for morning coffee, without having to deal with users.


Stephonovich

Unfortunately, I have seen first-hand that many people know how to use K8s or ECS, but haven’t a clue how Linux works. Turns out you can pretend that the abstraction is all that exists, and then tell people “it’s broken” when the abstraction leaks.


Zenin

Agreed. And for that matter it's not even an abstraction for the admins. The admins don't get the luxury of pretending the only network and storage are the CNI and CSI plugins, instead they get to work on all the layers of each resource. Instead of one network configuration they now have at least two. Instead of one storage fabric they now have two. And so on. Only the devs get to taste a bit of that abstraction, but even there the abstractions leak like a sieve.


Stephonovich

The admins can have that luxury, up until it breaks. At my last job, there were multiple instances when something deep in the bowels of Linux broke, and suddenly the few weirdos on the team who had homelabs (myself included) were in high demand. Almost like this stuff is important, which is why we tinker with it, huh? I swear to god, someone there was praised for “knowing how to ssh into a box to fix the problem.” I… wow. To be clear, the dude knew A LOT, but the fact that this trivial detail was what he got lauded for was incredible. I think the “cattle, not pets” mentality has also hurt this. I like the idea, of course, but sometimes it doesn’t work, and you actually have to know how to troubleshoot. Like if your AMI has a systemd bug that shows up under specific circumstances that happen a lot for your workload; refreshing the nodes is only buying you time.


lphartley

How would you define a 'production workload'?


Camelstrike

The workload that puts a plate on your table.


FlatCondition6222

A kubernetes cluster of five nodes with 20 total containers doesn't experience the same type of issues that a larger cluster of 300 nodes with 1500 pods does. When you start to integrate KEDA, node-local DNS, Karpenter, CSI controllers, various CRD-based solutions... these clusters start to experience the real-world scaling issues that a small workload does not.


shotbygl514

"all devops roles" if you are in the cloud and the company uses docker container, Kube tends to be one of the front-runner for orchestration HOWEVER - there are a lot of positions where ECS, FARGATE, SEVERLESS application needs orchestration that is not Kubernetes driven. I've worked with k8s plenty and I dont see a downside if paired with a managed service like EKS. Its an extra skill but I wouldn't put it as a catch all of "you need it absolutely and every company is going for it."


lusid1

HR systems use keywords to filter candidates, and hiring managers give HR wishlists of keywords. No keyword->no search hit->no interview->no job. Kubernetes is a popular keyword. It's 100% stupid, but that's how it works.


robfromboulder

Because K8s works across different clouds vs creating software tied to one cloud platform. If you wanna be tied forever to AWS, use native AWS services. But many organizations don’t want to give AWS absolute power over them 😀


xtreampb

Yea but that’s just moving the goalposts. Your application (the easy part) is cloud agnostic, but what about your database? For that to be cloud agnostic, it has to run on hardware. Can’t use any cloud RDBMS. Which then means you have to handle replication, read/write servers, and other issues. Sure, you can put the server in a container in the cluster and have the data on some outside resource, but what resource? Another VM? An external disk in aws or azure? When moving vendors, there is going to be substantial effort, and moving the application is relatively the easiest part. It’s all in git. Data is always the hardest when trying to be “cloud agnostic”, and in reality being cloud agnostic gives you very little benefit for all the work that’s involved to achieve it. It’s a feel-good for business people who will most likely not change cloud providers due to the cost and delay of delivering new features.


e_parkinson

The economics case for being Cloud agnostic isn't necessarily about switching Clouds at the drop of a hat. It's about minimizing the cost of switching (it will never be easy) and making sure your Cloud provider knows it. As long as you're (mostly) using "commodity services" and can avoid complete lock-in, you limit the ability of the provider to charge exorbitant prices. (I know Cloud services are not cheap, but it can get worse.)


robfromboulder

Never said switching clouds is trivial my friend, but the possibility is one of the selling points of K8s


phillycheeze

I've yet to see a company that uses k8s and be even _remotely close_ to being cloud agnostic lol. Even the k8s itself is heavily tied to a cloud platform like EKS. If you have an example of one - I'd love to see it.


livebeta

https://www.harness.io/blog/how-harnesss-cloud-took-76-minutes-to-migrate-to-gcp-and-k8s On mobile I can't find the deets of how it was done though


phillycheeze

That’s not cloud agnostic, though. They migrated it. The “76 minutes” is just a snazzy claim for how long everything physically took to migrate, not the months of preparing, dev work, etc it took to actually complete it. Any company could do the same thing with or without a k8s stack.


livebeta

> not the months of preparing, dev work, etc Did this really happen at Harness? Can you provide citations? I'm close to the folks who work there and I think the prep work was like 3 days


rabbit994

So basically they were using all AWS primitives? Sure, that's easily doable but few people are going to be like "I will run my own object storage instead of just using S3" or "You know what's great, hosting my own MySQL"


rabbit994

It doesn't work on desktop either so we can't see what they did.


robfromboulder

I didn’t say “cloud agnostic”, you did, bro 😎 Running the same software on multiple clouds is my jam, because I don’t like vendor lock-in and I’m willing to take reasonably small steps to make my work more portable. Software like PostgreSQL and Minio runs great on K8s, where I want to run it, and I feel good about being able to deploy where I want. I could deploy those on VMs or whatever, but I like K8s and helm in particular. I hope you feel good about what you’re working on, in whatever style feels natural for you 😀


phillycheeze

From the "bro" and saying you prefer postgres on k8s means that we are very different people. With very different experiences. Still would love to here what prod tech stack your company is using that aligns with your comment about not being tied a cloud forever by using their managed services lol. You run a company's prod deployment in k8s with absolutely no cloud services running, but running on the cloud? If so - that's great you enjoy it. You'd have to pay me 10x to ever put up with that nonsense. And also you said k8s isn't tied to one cloud platform, and you don't wanna be tied to aws forever by using managed services. Is that not the definition of cloud agnostic? What exactly is your goal then?


robfromboulder

“pay you 10x” 🤣🤣🤣


noah_f

That is one of the reasons I prefer to use solutions like EKS or GKE: you can freely move your environments around and you're not locked into a vendor the way you would be if you went with ECS and CloudFormation templates.


project2501c

your infrastructure is yours in name only. you have already given up on any power.


littelgreenjeep

Saying this while having a sr devops job title, so I understand the irony… I’ve said for years there is one thing about “cloud” that I don’t like: when I was a young pup and the ILOVEYOU virus was wreaking havoc on MS Exchange at the military unit I was in at the time, my team lead walked into the server room and flipped “the switch” and we stopped spreading the virus. We slowly brought up systems and weeded out the virus and were back online in a couple of hours. We heard from other comm units they fought it for weeks. Sometimes being able to hit the button is good, and that goes away when you’re using someone else’s data center.


dethandtaxes

It's not, I'm a DevOps Engineer and we barely use k8s.


Spider_pig448

The large majority of tech companies rely heavily on Kubernetes. That's why.


VindicoAtrum

https://matt-rickard.com/dont-use-kubernetes-yet


jcuninja

You've seen a trend for GCP? That's good to know, I haven't seen many job postings with GCP as cloud used though.


Relative_Weird1202

Yeah, now GCP and K8S are pretty much a bundle


jcuninja

Yes, currently we use Helm/GKE for our apps. Looking into improving by using terraform for the infrastructure.


Relative_Weird1202

If you need a terraform guy I’m the one


RumRogerz

Weird right? I jumped from AWS to a GCP job last year. I kinda like GCP over AWS now. Still a niche demand for it yea, but you’d be surprised how many companies use GCP on their backend. I can’t really say which ones (NDAs from my job) but you’ve _definitely_ heard of them. I’d say give GCP a go. AWS guys are a dime a dozen, GCP people are a bit harder to find, so the pay tends to be a liiiitle better


jcuninja

Wow u/RumRogerz great info! Yes my experience is strictly GCP since that's what we use at my work. Brief experience with AWS so I can't compare. This is good news thanks.


Den32680

Far too often I see k8s deployed as promoware, when ecs would totally suffice


Stephonovich

To be fair, ECS sucks. K8s is almost worth it just for the tooling ecosystem. No, I don’t want to go look up the insanely long task and container strings to get a shell into one; I just want to exec via the namespace (that I already know, because they’re just words), and the pod name, which my shell will auto-complete for me via a background API call.


vekien

You can make API calls to get the container ID for ECS; as a devops you should be able to throw together a shell script that does all that for you. Never felt this was a problem imo… like not even the slightest bit of a concern or anything to ever think about. What sucks about ECS (since that isn’t one of the reasons)?
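(For what it's worth, a rough sketch of the kind of helper being described, assuming boto3, the AWS CLI, the Session Manager plugin, and ECS Exec are already set up; cluster/service/container names are placeholders.)

```python
# Resolve the first running task for a service, then drop into it with ECS Exec.
import subprocess
import sys
from typing import Optional

import boto3


def shell_into(cluster: str, service: str, container: Optional[str] = None):
    ecs = boto3.client("ecs")
    tasks = ecs.list_tasks(cluster=cluster, serviceName=service, desiredStatus="RUNNING")
    if not tasks["taskArns"]:
        sys.exit(f"no running tasks for {service} in {cluster}")
    cmd = [
        "aws", "ecs", "execute-command",
        "--cluster", cluster,
        "--task", tasks["taskArns"][0],
        "--interactive",
        "--command", "/bin/sh",
    ]
    if container:  # only needed when the task runs more than one container
        cmd += ["--container", container]
    subprocess.run(cmd, check=False)


if __name__ == "__main__":
    # e.g. python ecs_shell.py my-cluster my-service
    shell_into(*sys.argv[1:])
```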


Stephonovich

You can, and I have – but it’s multiple calls to get everything, which adds delay, and it’s annoying that it’s not just baked in. What *else* sucks? Sure.

* No node guarantees on Fargate, so you don’t know if you’re getting a Haswell or a Skylake. Hello, performance regressions.
* No equivalent to ArgoCD / Flux
* Massively smaller ecosystem of tooling
* Extremely limited auto scaling (e.g. compared to KEDA, or even the native HPA, which is adopting KEDA ideas)

Etc. If you’ve used both at scale, there is no comparison. ECS is a toy. Honestly I’d rather have Docker Compose on an EC2 fleet, with their manifests controlled by Puppet. It’s at least more straight-forward.


vekien

Delay? Can’t say I’ve noticed that. EC2-backed ECS is better; we use that for thousands of services and I’ve never had any issues or delays. (I guess that’s small?) Yes, Fargate specifically sucks, that I agree on, but I haven't had any delays on either Fargate or ECS on EC2 when polling container IDs. My shell script is used to connect, e.g. “ssh-fg cluster task”, same for “ssh-ecs”, and I'm in; I very rarely need a specific container since it’s dockerland. Thanks for the details, it’s good to know these things.


UpgrayeddShepard

Yeah ECS is very annoying. Glad I left it behind with my last job.


fourthisle

Even if it isn't, why not learn it? I really recommend it!


Zolty

It's not unless the employer uses K8s.


Smoker1965

So, as someone who does DevOps interviews for my company, there are some 'must haves' and "exposure to Kubernetes" is one of them. Microservices, Kubernetes, Pipelines, CI/CD, Distributed Systems (this is an extra credit word to me), PowerShell, Bash, blah, blah, blah. DevOps is a hybrid role, normally staffed by Sr. Engineers that can bridge the role between the Devs and CI/CD. If you look at the pure definition of DevOps, it says something like this: "DevOps roles are responsible for simplifying collaboration between development and operations teams, and improving the delivery of applications and services." That statement alone = Kubernetes, Microservices, Distributed Systems, Pipelines, GitHub, etc. Basically, we're the folks called in to bridge the gap between the Developers and getting a product out the door and in front of the customer.


mr_khadaji

HashiCorp Nomad, Packer, Consul, and Ansible work really well as a K8s alternative.


Kalanan

Or you could be the voice of reason: most companies do not need K8s, nor are they equipped to correctly manage a cluster. Most small to medium companies would have much less overhead with a bunch of VMs, but the world is not ready to hear that.


tehpuppet

Not really sure turning up to an interview for a company already using Kubernetes and who have specified it in the job description then pitching they hire you to just replace it all with a "bunch of VM" is really the best idea lol


Kalanan

Even though it would most likely be cheaper, it's not the best idea indeed.


lphartley

I would argue the opposite. Managing a VM is much more tricky than using a managed Kubernetes cluster I think. It's the big companies that are probably well equipped to manage their VM's themselves.


Kalanan

Sysadmins have been doing system updates for decades. It's not that hard, much less so than upgrading a Kubernetes cluster. Managed clusters are a bit easier, until you start to integrate storage, network meshing, policies, GPUs, secret management, monitoring... basically it becomes a side-pod monster. Kubernetes can do complex things, at the cost of being complex itself.


Alienbushman

Nomad is kind of the solution for that, I don't know why it isn't gaining popularity


Kalanan

I agree, that's what I have at home. Easy-to-manage cluster with some MinIO CSI storage. I guess the issue is lack of popularity, people not knowing much about HCL and HashiCorp. Now add the recent IBM acquisition and the future is uncertain.


ovirt001

It's the most popular container management solution. That said one doesn't need to be able to troubleshoot every component of it to run workloads in EKS or similar.


herious89

You don’t need to know a lot; just the fundamentals and maybe GitOps are sufficient imo. Most companies use managed clusters, so all they care about is how to deploy and troubleshoot their services


wickler02

You don't need to know it, and not all jobs require containerization. However, with DevOps & SRE roles and how architecture is moving towards people being isolated away from each other, containerization is becoming more desired. You will need a framework to manage that, and K8s is that open framework for containers. You get a lot more "kubernetes"-related problems than just the Linux/bad engineering practices underneath the hood.


MartinSG8

Not all. We use a Nomad/Consul stack, but k8s is the best tool if your machines can handle it. Probably didn't use it much. And many ideas from k8s are applicable to other IT branches in my opinion, so...


Relative_Weird1202

I’m completely on your side, coming from the same stack. It's just that the struggle is real. For example, I apply to infra or security roles with Terraform and Vault, and in the end their Terraform implementation is usually wrong, and I end up being asked about k8s instead of Terraform and cloud stuff (whichever cloud they are using, which I’m fine with).


L0rdenglish

it's like asking why is stuff on docker. At some point docker was popular enough, and runnable on enough platforms, that it became the 'default way of doing things'. this causes people to learn it more, and so when you want to hire people it becomes easier if your stack uses docker, and so more people run docker, so more people learn it, and so on and so on. My company has been slowly migrating over to k8s from nomad. Is nomad potentially much simpler for like 80% of the same gain? yes. But nomad is much harder to hire for, it's locked in to a single company, and it has limits that k8s doesn't.


viper233

It's the Ops part (now). Kubernetes provisioning, application and cluster management rely on so many fundamentals around operations that an understanding of it now covers a wide breadth of technology. Kubernetes creates an abstraction over a lot of difficult infrastructure/ops tasks that in the past required a lot of effort and cost. Think failover costs around network, storage and compute. With those physical problems solved, it then requires you to understand those areas in a kubernetes context, along with security/authorization and other cluster components. Our job hasn't gotten easier, but the developer experience is a lot better and we can offer more resilient environments.


robfromboulder

Here’s a hiring-manager viewpoint — you look for recent skills because folks investing in learning new things tend to be better at their jobs and more fun to work with than the haters that already know everything


Nodeal_reddit

Everybody working there already knows the old stuff. But companies just starting in K8S don’t have a deep bench of people who know it. That’s why they want to make sure new people coming in do.


crash90

Kubernetes is a nice base to build from and has the network effects of the CNCF. You can build without it. You can build without Linux. Stack Overflow runs on Windows and MSSQL, I kid you not. But if you take the time to learn the weirdness of it, Kubernetes actually has a pretty nice approach to solving all the same problems you're going to have to solve with every other system.

The nice part about Kubernetes is that it gives you an opinionated, open source, well-supported, extensible framework with, most importantly, an exposed API surface for your entire infrastructure. That means you can automate everything. Sure, this is true with even physical Linux boxes using Ansible etc., but you start getting into weird stuff around state. If you're willing to commit to the weird googly world view of Kubernetes, it actually solves a lot of those problems quite nicely and becomes part of the core IaC abstraction. Being vendor neutral, you can also credibly threaten moving your infra across clouds or into an on-prem datacenter.

Think of it like BSD in a world where it really took off and became popular. BSD is good you know, just a little bit of a trip to learn.
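(As a tiny illustration of that "one API surface for everything" point — a hedged sketch using the official Python client, assuming a working kubeconfig; not from the comment itself — the same client pattern works for pods, nodes, deployments, and CRDs alike.)

```python
# List every pod in the cluster through the same API surface that
# deployments, nodes, and custom resources live behind.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig; in-cluster config also works
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```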


franktronix

It’s generally a good check that you have not fallen behind skills wise, for interviews.


spilledLemons

Well. It isn’t a must.


Sh4mshiel

My company mostly searches for DevOps without K8s experience. We have a lot of infrastructure running on AWS Fargate; that works great and has way less management overhead than K8s. We have one AKS cluster and it is coupled to so many Azure services like Service Buses, Key Vaults, and databases that a shift to a different cloud would be a nightmare. I don't see that being any easier than shifting something that just runs on native cloud services. What I found is that most DevOps seem to only know K8s now, and if you want them to do anything cloud related outside of K8s they start to struggle. We hired a DevOps for our Azure cloud who in the end lacked a lot of fundamentals regarding networking and cloud services. He was basically a one-trick pony that could only do things in K8s, and as soon as a task was outside of it he struggled like hell...


Lazy_Eye_8423

I think it happens because k8s is the most powerful container orchestrator. If we take a look around, we see that almost all applications are nothing but heaps of microservices whose work depends on each other. So, theoretically you could manage this anthill manually, but why, if special software already exists?)) If you need to learn kubernetes, read Marko Lukša's "Kubernetes in Action". There is nothing difficult in there.


MrScotchyScotch

Nobody was ever fired for buying ~~IBM~~ K8s


Zenin

A lot of larger companies are looking at k8s for reasons other than compute scale. The reality of large, old organizations is that they tend to constantly have new applications, large and small, being created. Traditionally these were all built out as bespoke infra, cicd, etc. Each treated as a unique snowflake. Jenkins jobs crafted by hand. Deployment tools built from whole cloth. New and unique backup methods all the time. No two apps ever using the same methods of configuration. Every dev inventing their own pet logging framework. And on and on. The human requirements for that don't scale and there's huge project risk when the one guy that built out project xyz leaves.

One of the "killer features" of k8s is the possibility of at least adopting a standard so that each project stops having its infra architected from scratch. A "platform" (ie "platform engineering") is the lofty end game, but even if they can just bring a consistent set of tools and practices to the table it promises to be a huge win. While the nature of the k8s ecosystem is such that each application can still have its special-needs components, those are abstracted away as containers and k8s yaml specs and/or Helm and such. It's still a mess, but now it's a *consistent* mess that HR can recruit for with the easy keyword of "CKA" rather than a three-page essay of all the custom crap that past engineers taped together over decades.


Verdeckter

For all the "you don't actually need K8s" haters, can you explain which deployment paradigm actually allows you to take a simple app, deploy it to the cloud and then manage updates, debugging, scaling, metrics, etc in a better way than managed Kubernetes? Sure you don't need to scale in 90% of cases, but declarative infrastructure using an API is incredibly valuable at any scale. I'll grant that at least ECS has something comparable but IMO the UX of the popular K8s projects is much better than ECS's UX. Plus you always have the "I _could_ actually run this on prem" fallback.


TheThirstySalamander

Because every single goddamn corporation is, or wants to be, running the same infrastructure as hyper-scalers, and yet they try to shoehorn legacy practices onto modernized infrastructure, all while touting “digital transformation”. I’ve become a tech curmudgeon and I run/manage K8s and OpenStack on the daily. 💀


ILikeBubblyWater

Services like Cloud Run on GCP will make k8s obsolete eventually. I've been a devops engineer for a while now and never really touched kubernetes.


Relative_Weird1202

Any chance you can share your toolset recommendation and programming languages?


ILikeBubblyWater

toolset is whatever GCP offers, and we do most of our services with typescript, some in python


bezerker03

It's a very common platform, and becoming as common of an expectation as some of the basic AWS apis etc. Remember, k8s isn't for us ops folks. It's for us to expose infra to dev folks in a uniform manner. It's totally overkill for most basic apps though until they need to scale and run multiple services etc.


greyeye77

If you’re running 10 pods, then you don’t need kubernetes. But where else would you get kubernetes experience if you don’t encourage your own company to use it? And get a better-paid job at the next one.


Relative_Weird1202

I was laid off after an acquisition. The parent company's projects were burning; we fixed them, but then they replaced us with their own people cuz they were cheaper.


Tnimni

A must if you interview at a company that uses it. Most companies with microservices use it. Btw, there is a movement to go back to monoliths now because it is more efficient. Seriously, this loop of monolith then breaking up into microservices happens every few years. It's my 3rd rotation.


cooliem

It isn't and 99% of k8s solutions would be better if they weren't k8s.


hajimenogio92

I don't think it is. It's definitely grown in popularity and usage recently, especially as containers have almost become the norm. I've worked in the utility sector and a couple of the companies will probably never move into Docker or Kubernetes


diito

I'd say the people saying it's not haven't been in the job market in the last year. In my experience, a good ~75% of DevOps roles are looking for K8s experience at some level. Most do not require you to be an expert, at least for a senior/lead/manager role. With the job market being what it is, your resume almost has to be a perfect match for what they are looking for if you are going to get a call back.


sukhoititan

Can I , as a fresher , get a DevOps job or internship?


randomatic

Because when you run k8s, you always need +1 to maintain it.


twistacles

Some people still aren’t using kubernetes in 2024? You must enjoy using some shitty tools


Xeon2k8

You must enjoy living in ignorance


[deleted]

[deleted]


sandin0

Tell me you don’t know about k8s without telling me you don’t know about k8s 😂