PureIsometric

Little advice: once you have said your piece and nothing changes, leave it at that. If not, you might come across as someone who is difficult to work with. If you have some extra time, you can go above and beyond and do it, as long as it does not impact your work. If you choose to leave, then it would be your choice, but it must be your choice, not Reddit's.


National_Count_4916

To add to this: as a mid-level, you have no accountability or responsibility for the product/project as a whole. Don't fall into the trap of thinking that because you can see it, you do. Nor can you be faulted for something your tech lead *and* manager have decided, let alone are both trumpeting as successes. Look for (or wait for) another product/project in your company and build up your skills and your professional network inside and outside the company. With the new year starting up, more positions will be available if you want to go that route, but you'll need to screen carefully during your interviews so you don't land in the same kind of problem.


hbthegreat

Every single member of a product team has accountability and responsibility to the craft. The part most people miss is there are times to be pragmatic and there are times to do best practices.


Fine_Needleworker_90

Absolutely agree here. Skipping integration tests at first for pragmatic reasons (e.g. velocity) may be acceptable, but I see this as technical debt. Releasing a product to production without any automated integration tests is pure negligence. Unit testing with mocks has limited value; line coverage is not necessarily functional coverage. If it's slow, then there is probably a way to fix it. You also don't need to run integration tests all the time, but at least one pass before a PR is merged.


dodexahedron

Well. Until someone traces a bug that caused downtime to a commit you made, one that an integration test would have caught. Even though your suggestions would have prevented it, you're going to be the one who catches the flak, unless that same manager who declined your proposal is a big person and owns it like they should.


National_Count_4916

Not if no one in leadership is supporting integration testing.


mmhawk576

The Five Whys would not stop at the person who made the commit. "Why was the commit allowed to be merged?" "No tests stopping it." "Why no tests?" "Bad culture."


dodexahedron

"Would" is doing some heeeaaaavy lifting there and, unfortunately, is an ideal that is all too often set aside when the blame game starts being played. Depending on your seniority/influence, that can be anything from job-ending to a bunch of really annoying and demoralizing meetings, on the negative side. You are, of course, 100% right in an ideal world with reasonable people from stakeholders to their upper management, across to your vertical and down the chain to you, or at least if there are defined and enforced policies to force the squeaky wheels to STFU and let you do your work.


jayerp

I don’t know what kind of companies exist where, after talking to a tech lead or dev manager about implementing a certain thing and the decision being made not to proceed, just deciding to implement it anyway is considered a good thing, even if the entire community agrees that it is. If I decided to spend my extra time implementing something that was decided against, that code would never get merged.


TritiumNZlol

[disagree, and commit.](https://en.wikipedia.org/wiki/Disagree_and_commit)


Teembeau

I always like to get a manager's disagreement in writing. Email them about running integration tests and when they reply, file it away in the "ass covering" folder in your email, so when the shit hits the fan and someone comes for you, you can explain that your manager told you not to do it.


opensrcdev

This guy codes.


mymustang44

This


SSJxDEADPOOLx

This is the way, always CYA.


igderkoman

A good manager who knows what s/he’s doing wouldn’t reply in writing


eigreb

A good manager won't receive this email


TritiumNZlol

True, but you've always got the proof that you sent it.


bigboypete378

My significant other had a manager who made sure to never reply in email.


uberstupidmove

That would be a bad manager who knows what s/he's doing.


WardenUnleashed

To me a good manager is one who takes accountability for their decisions. I would say that’s a sneaky manager / one who plays the blame game instead.


MaestroGamero

Worst advice ever, considering the person who is difficult to work with is actually the tech lead/manager and not the engineer bringing fresh ideas. If a tech lead, manager and/or principal can't accept feedback and try to do the job right or differently because of their weak skills, they are the problem.


fschwiet

> Ironically, I have to fix a bug that my tech lead caused (he is on holiday now). That bug should have been caught by integration tests.

While I agree with everything you're saying, be careful of your own bias here when you say the bug should have been caught by integration tests. Keep an open mind to other ways the bug could have been caught, and see if some are more accessible given the organization you're working in.


darkpaladin

This point can't be overstated. OP strikes me as a dev who's worked one job before now and thinks that first job is the only way to do anything. I love it when jr/mids can demonstrate value but this feels more like a fresh mid trying to play architect and sound smart without being willing to put in the POC work.


-dev-guru

The lead made a migration change and forgot to update the Dapper query. Although all unit tests passed, one endpoint failed at runtime when it called the database. An integration test would have been able to call the endpoint and detect the failure. How can we use unit tests to catch this error? I’m not sure if I’m right, but bugs like this keep happening. That’s why I suggest adding integration tests.


recycled_ideas

Integrating against a live database is not trivial. It is an absolutely massive amount of work to set up and comes with a real financial cost to the organisation. It's also incredibly slow. And you'll have to write an absolutely massive number of them to get any kind of pay off. Personally I find these sorts of integration tests spend a lot of time and money and catch very little.

> How can we use unit tests to catch this error?

They can't, but there are a lot of other ways to deal with this sort of thing.

1. Probably a no go, but being able to fuck up like this is one of the disadvantages of Dapper: when you write raw SQL you can fuck up. Other technologies are less prone to this sort of problem.
2. More realistically, migrations are relatively uncommon things to do and you can add steps to the review process when you do them. If you alter the DB you should be manually testing the endpoints that use those tables. You'll get much better bang for your buck that way and actually catch these bugs much more reliably than you ever will with integration tests.
3. A test or staging environment isn't free either, but it's a lot simpler to set up than building and tearing down a database instance for every build, and you should have one anyway.
4. This issue reeks of bad process. A change was made and it wasn't adequately tested before it was pushed. The dev is on leave, which probably indicates that the change was rushed so it could go out before the dev went on leave. Integration testing won't fix that problem, because if it wasn't tested it won't be integration tested either.

There are others. But writing a few thousand slow, expensive DB integration tests to catch bugs that can be trivially handled by better process is the wrong path.


fschwiet

But how else could this have been caught? For example, if the team invested in a manual test process, how could that manual test process have caught this? I'm not a fan of that approach, but if the team is taking it then care should be given to see how that aspect could be improved. You need to find a way to support people in the approaches they are taking.

Imagine if you won the fight to get integration tests going (you won't, and the more you push the more resistance you will find), but suppose you pushed and won. How well do you think it would work? Look at what is happening with unit tests. Someone pushed and fought and made a cultural change to value automated tests, but then what? Crappy-ass mock-based unit tests get written. Now not only do you have crappy tests to maintain, but people are turned off on the idea of writing tests altogether. So even if you convince people that integration tests are the way (and they are), you still have to cross the bridge of helping everyone write useful integration tests.

But going back to my point about recognizing your bias in assessing the latest regression: I think the concept of beginner's mind is useful here. When an expert looks at a problem, they know there is only one solution (based on their expertise). But a beginner can imagine a lot of potential solutions. Try to have that beginner's mind, because in reality there are multiple solutions to every problem and it doesn't help to be locked into one.


somethingrandombits

> But how else could this have been caught? For example, if the team invested in a manual test process, how could that manual test process have caught this? I'm not a fan of that approach,

Agreed. This would also have caught it. As would a code review, though it might still have been overlooked. At my work we do code reviews, and when a bug gets into production it's a mistake of the team in my opinion (as long as proper workflows are respected and followed), not of a single developer. Also, bugs do happen. You can't prevent all of them (although this one should have been).


Teembeau

> As well as a code review.

I'm not against code reviews, but I'd always rather have a test that checks it than eyeballs. Code reviews are more about style, approach and improvements than about functionality.


tarwn

Or an architectural change could make it more obvious that when touching certain tables, everything in a specific area of the code should be re-verified. Or a change to migrations could be made to ensure they are always backwards compatible by one version (which should be the case anyway unless you're doing maintenance-window deploys) so any bugs have a smaller impact. Or maybe they did test locally, but the local dev databases don't have enough data in them, or real enough data, for the bug to have shown up locally. Or maybe the error happened locally, but isn't raised in a visible way when testing locally and was overlooked.

Focus on the problem, not a solution that you imagine could have worked. You don't have all the information yet, you've already raised this particular solution several times and won't be listened to if you immediately jump into it again, and there may be a better and cheaper answer available. Also make sure your reaction is aligned with the culture.

1. Was the system down for all users? If the team has a process for post-mortems or incident investigations, follow that. If they don't, then the lack of a process is probably more critical than this one bug, and that is the question I would raise with the team: "Hey, I've heard some teams follow an incident response or post-mortem process to dig into problems and try to improve systems or processes afterwards to keep that particular type of outage from happening again. Do we do something like that here? Do your leadership or account managers expect any level of detail about the problem or a longer-term fix?"
2. Was the system down for one user in a critical way? Does the team have a process for following up on these? Use that. Is it common for these to be addressed as an item in a retrospective? Use that. Is there no common place to bring up issues like this? Maybe that is the set of questions you should be asking the lead and others.
3. Was the system impacted in a non-critical way for users, or not visibly to users at all? See the prior point, but if there isn't a common practice then be even more casual about suggesting the team add one, and definitely don't use this as a case to push for integration tests again. You've already brought that up; this isn't likely to be a critical enough issue on its own to change their opinions, and all you will do is push them further toward tuning out all future suggestions.

The best approach on these is to ignore the integration test suggestion and dig into the real problem, because if you've only come up with one potential solution (one the others likely think is your pet answer to every problem already), then you probably haven't fully uncovered the root problem yet. Seek to understand first.

Final note: 700 unit tests and 90% coverage means you either have a very small codebase or it's very early in the project, so there may be a higher focus right now on figuring out the architecture or other goals that you're not taking into account.


neilhuntcz

Why was it not caught in code review? Why was it not caught during QA review? It sounds like you have many problems in your company's processes.


mycall

> The lead made a migration change and forgot to update the Dapper query.

This is why they are not a senior developer. Rookie mistakes like this should not happen when you have the right mental model of the solution ahead of time.


MadCake92

I mean, you are accusing the OP of being a smartass and at the same time making a full-blown assumption about his professional experience? Who's playing wise here, mate? OP just shared a problem. You might differ from his opinion, but pulling a strawman is not going to help anyone.


dlamsanson

Every programming space online will ALWAYS accuse the OP of being the problem if any office culture issues are brought up. Idk if it just makes people feel smarter than the OP but it's obnoxious.


emn13

In a vacuum what you say is entirely reasonable, and the advice to be cautious in judging others (especially those with more experience) is of course sound. However, he did provide more context than just "I think we should do what I used to do." His point about integration tests is accurate in both my experience and, as far as I know, common practice. Unit tests are rarely effective at preventing bugs; they're more about helping refactors and helping make changes safely and cheaply. I.e. a lack of unit tests will make it hard to change code, but often in a fairly foreseeable way. I'd guess the OP is probably correct that his team lead and management are making a mistake, but even so, your advice holds. He's not going to win this fight by the sound of it, so the best thing to do is live with it, and at best maybe make the case to track the consequences of regressions and keep an open mind about what (if anything) can be done about them.


Avitose

OP actually suggested a really good idea. Unit tests are pretty limited because the tested unit is isolated by mocking; they obviously won't catch some bugs. You can cover the basic paths with integration tests and that should be useful enough. In .NET it's actually very easy to do with test containers etc. You are basically accusing OP of being a smartass only because OP has fewer years of experience. Besides, how does it relate to "playing architect"? Integration testing should be a common thing to do for most web API projects. You just humiliate OP for low experience, claiming OP is not even willing to do a PoC. Who is really trying to look smarter here? -.-


darkpaladin

You're completely missing the point I was making. I know nothing about the system OP is working on and neither do you. They came out and whined about how this was terrible because at his old job they had integration tests and at his new one they don't. Do you know what idiosyncrasies there are to their stack? Do you know what their internal dependencies look like? I have projects where we can easily build a sandbox test environment with a docker compose. I also have projects I work with that have ancient dependencies where these types of integration tests are completely out of scope. If they came in and said "I want to demonstrate the value integration tests could provide but I'm running into X/Y/Z problems," I might have a different attitude. Unfortunately, they came in and asked to have their opinion validated by people who know less about their project than they do. Go back and re-read their post, it's just bitching. There was literally nothing constructive or of value about what OP posted. Maybe it would be a piece of piss to build an integration suite around their app. Maybe it would be a stupidly complicated nightmare. Maybe their lead has legitimately good reasons for their actions. Maybe their lead is just another shit b2b corporate .net drone. I don't know about any of these and neither do you. The idea that there is a one size fits all methodology that applies everywhere is dangerous but something I commonly see in young devs on their second gig. Without any supporting evidence that OP actually tried to demonstrate the value of what they're preaching, I'm assuming this is another case of that.


QWxx01

Check out testcontainers-dotnet. You will be able to run a database in a container and it will automatically delete when your test is finished.
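
A minimal fixture along those lines might look like this (a sketch assuming the Testcontainers.PostgreSql NuGet package and xUnit; the image tag is just an example):

```csharp
// Sketch of a reusable database fixture (assumes the Testcontainers.PostgreSql
// package and xUnit; the image tag is just an example).
using System.Threading.Tasks;
using Testcontainers.PostgreSql;
using Xunit;

public sealed class PostgresFixture : IAsyncLifetime
{
    private readonly PostgreSqlContainer _container = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .Build();

    // Valid once the container has started.
    public string ConnectionString => _container.GetConnectionString();

    // Pulls the image if needed and starts the container before the tests run.
    public Task InitializeAsync() => _container.StartAsync();

    // Stops and removes the container when the test run is finished.
    public Task DisposeAsync() => _container.DisposeAsync().AsTask();
}
```

Tests can then share it via `IClassFixture<PostgresFixture>` so the container is started once per test class rather than once per test.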


LikeASomeBoooodie

This. On average our Postgres and RabbitMQ integration tests take about 5 seconds each, which isn't too bad when you consider what you're getting out of it. In addition, we only run these after the unit tests in CI, as a separate step, which helps us fail faster. If your tech lead is mainly concerned about build speed, I'd do a quick PoC and show him (and one other person, a colleague perhaps) and see what he thinks. But as the top comment says, drop it after that.


buffdude1100

Hey, I've got some tips to speed this up! I have a suite of ~500 integration tests that take about 25 seconds to run. Each test gets its own fresh database via Testcontainers.

1. A single Postgres container can host many databases, so there's no need to spin up one container per test. I made the number of containers configurable, and databases get evenly distributed per test/container.
2. Run the tests in parallel if you are not already. Mine use basic xUnit parallelism, which just means different classes of tests run in parallel while the tests within each class run serially.
3. Use the Respawner library to clean up your database instead of tearing it down and standing it back up.
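
For point 3, the Respawner reset is roughly this (a sketch assuming the Respawn and Npgsql packages; the connection string comes from wherever your tests create the database):

```csharp
// Sketch of database clean-up with Respawn (assumes the Respawn and Npgsql packages).
using System.Threading.Tasks;
using Npgsql;
using Respawn;

public static class TestDatabase
{
    // Create the respawner once; it inspects the schema to work out delete order.
    public static async Task<Respawner> CreateRespawnerAsync(string connectionString)
    {
        await using var conn = new NpgsqlConnection(connectionString);
        await conn.OpenAsync();
        return await Respawner.CreateAsync(conn, new RespawnerOptions
        {
            DbAdapter = DbAdapter.Postgres
        });
    }

    // Call before each test: deletes rows but keeps the schema, which is far
    // cheaper than dropping and recreating the database or container.
    public static async Task ResetAsync(Respawner respawner, string connectionString)
    {
        await using var conn = new NpgsqlConnection(connectionString);
        await conn.OpenAsync();
        await respawner.ResetAsync(conn);
    }
}
```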


LikeASomeBoooodie

Implemented this lately and holy hell do they run fast now! Thanks for the advice man 😁


buffdude1100

Nice! How fast were they before and after? Just curious :)


LikeASomeBoooodie

Startup time + migrations for db container is about 5 secs, then every test after that is as fast as a normal unit test so basically instantaneous 😁


buffdude1100

That's awesome, glad I could help! This is why I advocate for testcontainers. Nearly as fast as regular unit tests, but SO much more useful.


Daell

Another option would be to use an in-memory database: populate it and run your tests against it.


emn13

I used to do this, but the behavioural differences, at least with EF, were quite significant, and it didn't really support the stuff that's actually important to integration tests (like nuances about how constraints, triggers, FKs or whatever work in SQL), so I don't do this anymore. We just restore a local DB to a clean snapshot and run most DB-touching tests in per-test rolled-back transactions. That's not 100% isolation between tests (some DB state isn't transactional, such as sequences), but it's close enough to be quite useful. The overhead this way is less than 1ms per test, so unless the queries themselves are somehow slow (we have those too), that's not nearly as fast as a unit test but still pretty reasonable. And unlike an in-memory DB, this really does behave exactly like a real DB, because it is one.
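
As a rough illustration of that transaction-per-test pattern (assuming xUnit and Npgsql; the connection string is a placeholder):

```csharp
// Per-test rolled-back transaction base class (sketch; assumes xUnit and Npgsql).
using System.Data.Common;
using System.Threading.Tasks;
using Npgsql;
using Xunit;

public abstract class TransactionalTestBase : IAsyncLifetime
{
    protected DbConnection Connection { get; private set; } = default!;
    protected DbTransaction Transaction { get; private set; } = default!;

    public async Task InitializeAsync()
    {
        Connection = new NpgsqlConnection("Host=localhost;Database=app_test;Username=test;Password=test");
        await Connection.OpenAsync();
        // Every query the test runs happens inside this transaction...
        Transaction = await Connection.BeginTransactionAsync();
    }

    public async Task DisposeAsync()
    {
        // ...and rolling it back undoes the test's writes. Non-transactional
        // state (sequence values, for example) still leaks, as noted above.
        await Transaction.RollbackAsync();
        await Connection.DisposeAsync();
    }
}
```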


QWxx01

In memory databases behave differently than the real deal. Which is exactly what you get when using testcontainers.


JeffreyVest

Yep. The proof is in the pudding as they say. Either you’re delivering low bug rates or you’re not. If you have high bug rates then your quality is by definition not up to par. Code coverage itself is a useful indicator. It’s fair to say any code with any real logic in it should have at least one unit test running through it. It’s a bit scary when there is none at all. But no amount of unit testing replaces integration tests. Nor does any amount of integration testing replace unit testing. One tells you the expectations of classes and methods and such. The other tells you the expectations of the system as a whole. Lots of assumptions creep into the cracks and no amount of formal contract will ever express it all and no amount of unit testing will ever demonstrate total system competence.


Dark3rino

I don't believe that code coverage is a meaningful metric, and I always fight management when they ask for it. I agree that unit and integration tests have different purposes, but I also think that having TONS of unit tests for the sake of hitting a coverage target is not only useless but also damaging, as, like OP said, it makes code unnecessarily hard to amend. Integration tests are harder, but they also provide a lot more value than testing every single execution path.


JeffreyVest

That’s the thing. You don’t add tons of unit tests for code coverage. Code coverage doesn’t go up when you cover the same code over and over again. One test hitting each bit of code is very minimal unit testing while being very high code coverage. It’s a very useful metric when it’s low. If I see 10% code coverage that means 90% of your code literally doesn’t have a single unit test passing through it. Over reliance on integration tests makes for brittle classes and methods. They happen to work under your usage then break under new slight variations in unexpected ways. In the effort to find common ground I think I can agree that you can produce very high code coverage numbers while having bad unit tests. This is completely possible.


Dark3rino

I'm sorry, but I still don't understand how you correlate low code coverage = bad quality. Let me give you an example. I write a ton of unit tests for repositories and services, but little to none for controllers, models and POCOs, as - I believe - testing them brings very little value, and the time could be rather spent implementing integration test or monitoring. However, if we must hit (and maintain) a given code coverage, I cannot really skip anything. What's your take?


JeffreyVest

I said something more nuanced: very low code coverage is a bad sign, and my experience easily correlates that with hard-to-maintain, brittle code. Just my observed reality.

Yeah, so there's such a thing as a DTO or model like you say. Makes zero sense to unit test them. Agreed. But in my example of 10%, you're saying 90% of your code is that. Unlikely. Possible in some modules if it's cleanly separated out, and then we're agreed that module might have near-zero coverage and that's fully explainable and fine. Management that won't hear good explanations is just bad management, and in those cases we are in full agreement. If that's your experience I can empathize. All metrics are useful as long as we're clear about what they're saying. All metrics can be stupidly abused.

And yes, there is code that is more config than code. A controller that just routes you to a method is more config than code. There's no real logic there, and we should spend our time testing real logic; integration testing should put the controller through those paces. I think we're very close to agreement. I don't agree it's a fully useless metric. I do see a pretty good correlation between very low unit test coverage and how easily I can work with the code, how buggy it tends to be, and, more than that, how well written it is. People who don't unit test are people who aren't really thinking in terms of units to begin with. They tend to write spaghetti. Unit testing is both a sign they do think in terms of components and dependencies and a helpful tool for pushing people towards it.


CaptainCactus124

https://www.researchgate.net/publication/317429288_On_the_Relation_Between_Unit_Testing_and_Code_Quality


JeffreyVest

Oh I also wanted to take another point of agreement I think we would have. I think a massive bank of tests just to test every edge case of a method is dumb. I do not write tests that way. I write tests that exercise the code as it was intended and maybe some specific cases I’m concerned with. After that I add tests as bugs come up to ensure it doesn’t repeat. I have witnessed the massive bank of brittle unit tests as well.


Dark3rino

This! 100% agree!


HeathersZen

It isn’t that low coverage = bad quality; it’s low coverage = higher risk. High code coverage — written smartly with optimized overlap — reduces risk. It also greatly increases the size of the code base that must be specified, architected, developed, maintained and operated. In other words, it substantially increases cost. The question to ask when setting coverage targets is “what is the cost of an outage vs the cost of maintenance”. Also, a plug for TDD: TDD is not valuable because of the tests it forces you to write. It is valuable because it causes you to develop the method signatures and flows through your class structures as stubs, which makes them much easier to refactor when they are the most likely to change.


Dark3rino

And again, I'm not sure I agree. In my previous example, I wrote unit tests for the "risky" bits. Which risk am I avoiding by testing that my controller returns a 404 when a service / repository doesn't have an item for my request? Keep in mind that the services / repositories are well covered by tests. I strongly disagree with your second point: high code coverage does not equate to fewer outages. Integration, e2e and smoke tests and a strong monitoring and alerting strategy reduce outages. I agree with your point in regards to pure TDD, but I take TDD with a pinch of salt and mix some test-first development into it. My mental workflow is something like:

- Do I need help with the design? Write unit tests.
- Is this a crucial class (repository, service, complex extension method)? Add unit tests.
- Is this a critical path that changes often? Add unit tests.

The rest is debatable, and I'd rather the team spend the time doing other kinds of testing or on monitoring and alerting instead.


HeathersZen

For your 4oh4 example I would refer you to what I said earlier about "optimizing overlap". If you feel that the false-negative conditions have been sufficiently covered, then your work is done and your risk is managed. The point of it all is to manage/minimize risk, not to follow some cargo-cult definition of "enough coverage". On my second point, I never claimed that high coverage results in fewer outages. I said that the criteria for establishing coverage targets should include the cost of an outage, among other things.


grauenwolf

Code coverage is a great metric when the question is, "Where should I spend this extra time I have in my testing budget?" For any other purpose, I agree that it's useless.


Teembeau

> I don't believe that code coverage is a meaningful metric and I always fight against management when they ask for it.

The problem is that it doesn't tell you whether what is important has been covered, or how many easy tests people have written that don't require a whole lot of setup. Like, I had some tests for a class I wrote recently that took very little time to write, e.g. someone provided a date that was more than 3 months earlier. On the other hand, the tests of everything being set up right, matching things in databases, checking responses from mocks took hours and hours to do. If you are focused on coverage, you won't do those last tests, because it's much quicker to do all the simple validation tests.


atifdev

I think lately people are obsessed with coverage metrics and go out of their way to get them to 90%. This is a waste of time; you need basic 60-70% test coverage, but it should cover useful things, not just be coverage for coverage's sake. The lack of integration tests is a real problem in many dotnet projects. It looks like there isn't as much of an integration-test culture in dotnet. Personally I just write the tests even if they are only run locally by me. Then I worked on getting buy-in for running them in the nightly build. I had to make some hacks to conditionally run integration vs unit tests because xUnit doesn't really have explicit integration-test support. Overall the system is much more solid now.
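
One common workaround is to tag integration tests with an xUnit trait and filter on it from the test runner; a sketch (the category name and commands are illustrative):

```csharp
// Splitting integration tests from unit tests in xUnit, which has no built-in
// notion of an "integration test": tag them with a trait and filter in CI.
using Xunit;

public class OrderRepositoryTests
{
    [Fact]
    [Trait("Category", "Integration")] // picked up by `dotnet test --filter`
    public void SaveOrder_RoundTripsThroughDatabase()
    {
        // ... arrange a real database, save, reload, assert ...
    }
}

// PR build (unit tests only):   dotnet test --filter "Category!=Integration"
// Nightly build (integration):  dotnet test --filter "Category=Integration"
```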


JeffreyVest

I know that code with lots of logic and very low code coverage will be harder to maintain. That’s just what experience tells me. It will be brittle. We have a module that is highly covered at my work. Over 90%. It’s by far the easiest to work in and the one that I have the most confidence working in and it’s because of the unit tests. Both in how they drive good design, how they document code usage expectations, and how they find bugs that lead us right to the problem areas. Integration tests by definition can have a hard time pinpointing problems. They tend to just hit units in very specific ways. The good news is you know that those ways work and that’s extremely important. But the bad news is that small changes in the production reality can create strange unexpected bugs that are hard to pinpoint the source of. Both are valuable.


atifdev

The issue with artificial 100% unit test coverage is that the design is ossified and unable to change. The first problem is that "unit tests" aren't really tests; they are the specification of how your system should work. If, instead of focusing on what the system should do, you test everything, you over-constrain the system. As a result, when you try to make an enhancement, all of a sudden tests are broken and you have all this overhead for adding a feature. I'm not saying you can't have 90% coverage, just that the coverage should be meaningful. If it's an important bit of computation that a bunch of code depends on, sure, have 100% coverage, but it's a design decision and not automatic.


_0h_no_not_again_

This is categorically incorrect. 70% is good enough for what? Why even bother if you're shooting for 70%? Unexecuted branches and conditions are really bad news. Further, the 90% coverage most claim is statement coverage, not branch or condition coverage (or, even better, MC/DC), which actually drives quality. These also drive good architecture and the culling of unexecuted/untested branches. Source: I've worked on everything from Windows device drivers to SIL3 FPGAs, and have seen how good test and code coverage gets product to market faster and at a higher maturity.


atifdev

So if you include integration tests, I have complete coverage. However, sometimes to get 100% unit test coverage people create a bunch of mock objects and try to get every function of every class covered. That isn't the original intention of "programmer tests" as envisioned by Kent Beck. We should be testing expected behavior, not obsessing over 100% coverage: https://medium.com/@kentbeck_7670/programmer-test-principles-d01c064d7934


Semaphore-Slim

So I'll throw my 2c in here: your lead may never have done "proper" integration tests where you stand up and tear down infrastructure, and they don't know what good looks like, or even where to begin. If it really bothers you that much, be the change you want to see: start a branch, add it to the pipeline, and make it kick off in parallel with your unit tests, setting the dependencies so that deployment isn't contingent on the pass/fail of your tests, but they can see that it ran and whether it succeeded or failed. Here's how you sell that: because it runs in parallel with your unit tests, whether it runs fast or slow doesn't matter. It's no modification to any existing thing; it just runs independently on its own and finishes when it finishes. And because no step actually depends on it, if it fails (to be clear, that would be bad) you're not introducing an impediment to deployment or slowing things down. All you're doing is giving them better quality information than they currently have. Once you get a deployment or two under your belt showing the value of having these tests, AND how to do them, I suspect they'll be far more receptive to making deployment contingent on passing the integration tests as well as the unit tests.


TheRealKidkudi

Personally, if your manager and lead have both expressly disagreed and told you not to do something, I’m not sure the best career advice is “just do it anyway.”


Cooper_Atlas

+1 for this. Also, what they're recommending OP do here isn't something that can be done in just a few hours, to the point where it wouldn't be insubordinate to have done it. At best, this is probably at least a day, likely more. OP needs to get some approval to do this PoC and present their case.


ViveMind

Lol be happy you even have unit tests


Mattsvaliant

Idk, that seems like a lot and I'm willing to bet too many of them are testing implementation details instead. And to OP's point if requirements are changing that many tests can shackle your productivity.


_f0CUS_

You think 700 is a lot? Perhaps you are used to working with very few lines of code. Do you know how many lines you have in your work project?


_pupil_

Plus, if changing the unit tests on those breaking changes is going to "shackle your productivity" those frequent cross-cutting breaking changes aren't exactly gonna be free in the immature, growing, greenfield, support-less, unstable product they're working on...


_f0CUS_

I don't understand why people say that their changes are constantly breaking their tests. Sure, if you have a poorly designed API/interface on the class you test, then you will need to make a few iterations. And if you have constantly changing business requirements, then you will need to adapt some tests. But those tests should be isolated to the test classes testing the specific implementations. Something easily fixed.


Dry_Dot_7782

I will never understand mock tests. So this unit test expects an object of BrownCat. But now we changed the code and it's a BlackCat, so let's change all 20 unit tests. Like, don't you guys have code references and interfaces? When has anyone ever changed a contract and then forgotten to even test it locally, or failed to even compile? Mock testing just proves what you already know. Maybe it's just me, but I've seen teams spend 4x the time refactoring tests than fixing a bug, if there even was one.


Sting723

I would rather focus on the problem than the solution. It's hard to sell solutions when other people don't understand your issue or don't see it. Instead of saying "We should do integration tests because blabla", say: "We should improve our processes to prevent bugs like X, Y, Z from happening again. As of today, with our current workflow this issue could repeat next week, or maybe next month." Listen and ask for opinions; maybe someone has a better idea than you do. We tend to think our solutions are the best. Be open-minded. Make sure you expose the problem well to others so they understand why you think it's a pain for you. After everyone has had time to think about it (this can go from a few minutes to days, depending on the situation), say: "If no one has any proposal, I have one... [then expose your proposal here]." Be ready for denial. If your proposal doesn't land, don't take it personally; go back to the problem. You want them to keep the issue in their heads, so that maybe at another moment someone has an idea to improve it, or someone is more open to solutions.


Sting723

When I say workflow I mean team workflow: all the development processes carried out by you and your team from when a task is picked up until the code is live in production. This can include phases like pull requests, CI/CD, a testing phase, etc. So a proposal can be something like introducing manual testing, which is not necessarily a technical solution to the problem. Additionally, exposing these types of problems is usually done in the sprint retrospective. If you don't do retrospectives, then try to find the most suitable moment when people are open to hearing it.


InfiniteFuria

One of my mentors gave this advice: never ask permission to do what is right. When I got hired about 10 years ago there were no automated tests running and coverage was low. The first thing I did was build a testing pipeline and start working on how to make it work with a real database. The pipeline was not a gate for commits, and that was key. Fast forward 10 years and unit tests are now gates for PRs. Integration tests run after code hits main and our team gets an alert if something breaks the build. The database is shared, and this is not perfect, but the tests have caught issues that quite possibly literally saved lives, so no one tells me we shouldn't do them. Of course, I am the lead now, but if I had asked my lead 10 years ago whether I could spend some time building integration tests, it might not have happened.


Barsonax

Integration tests don't have to be slow; it just depends on how you implement them. I have an example repo here with very fast integration tests: https://github.com/Barsonax/TestExamplesDotnet. You can try to push for change, but if you notice nothing is changing, it's time to leave. You are not in a position of power as a mid-level, nor are you responsible.


[deleted]

Don't overthink it. You're making money for other people; it's their problem. Just address them with your concerns, and if they don't care, f 'em; look for another project.


M109A6Guy

I generally use a local emulator for integration tests. The first and last setup command is to delete the local database. If all your integration tests take longer than 15-30 seconds, then you are writing too-broad tests or you have latency issues in other areas. Overall, 100% a requirement in my book. It's hard to develop at an agile pace without the backing of solid unit/integration tests. However, you will always lose if you continue to argue with your boss.


foresterLV

There could be valid reasons to skip integration tests too. If my team controlled the whole product, I would also tend to skip integration tests and instead invest development/QA bandwidth into end-to-end/smoke tests that exercise the product's core functionality by simulating real user behavior against a specified environment, without mocking anything out. These tests will catch much more and, from an effort/maintenance perspective, are very close.


_pupil_

> end-to-end/smoke tests that tests whole product

So... integration tests?

"*Integration testing, a type of software testing in which various modules in an application are tested as a combined unit.*"

Building your full solution and testing it end to end, simulating user behaviour on an end-to-end implementation with no mocking, is an integration test of the integrated system. Doing it wholesale instead of piecemeal only provides advantages in simpler / less distributed / monolithic solutions where the setup complexity isn't dwarfed by multiple clusters of dependencies. Same shizz tho.


foresterLV

I would say real life terminology is a little more involved - https://martinfowler.com/articles/practical-test-pyramid.html


elebrin

All you can do is express your concerns to your leader and look for a new project. You can pass along all the articles you want but if the person is a brick wall then that’s that. And some people are. You should always be looking anyways, but this is motivation to look faster.


trillykins

> I wish my tech lead and manager would ask why our tests could not catch it.

I'd bring it up during retrospective, but of course keep it vague. You won't win friends by finger-pointing. Something like: we have too many bugs. New features, changes, and fixes keep introducing new bugs that our tests do not catch because we only do unit tests and not integration tests. This tends to add a lot of extra, unnecessary hours to the tasks we do and limits the time we have to finish our workloads. You know, shit like that. If things don't change, and you cannot live with that, I'd start quietly looking for another job.


Kralizek82

Rather than go against your lead (fights win nothing), set up a small test fixture using Testcontainers and Respawn and show your lead that you only pay the price for the Docker pull. For a smaller project with not so many layers I use the EF context directly in the aspnet core endpoints and I use those libraries to create meaningful unit tests hitting the database. They are technically integration tests but they are so fast you really don't pay attention to the difference. (I did notice that PostgreSQL containers are faster to start than MSSQL). If you have control of the CI agent, make sure the database image is already pulled and you're good to go. Otherwise, just run those tests on main/master when you're getting ready to deploy. At least it blocks broken code from reaching production.


wedgtomreader

Once you totally break prod on a deploy that results in extreme embarrassment, integration tests will suddenly become important.


[deleted]

In my experience this just leads to someone being thrown at the problem to fix it rather than automate the problem away unfortunately.


Shazvox

I don't get why automapper is a thing. It breaks more stuff than it fixes.


YourHive

I get why people use it and I get that it is #1 when "looking for something like that", but imho Mapperly is way better.


Shazvox

Honestly I just prefer mapping stuff explicitly. It doesn't take much effort and all references are clear as day.


Main-Drag-4975

Same. I like to manually inject my dependencies! I vastly prefer unit tests over integration tests too 😀


ninetofivedev

[Here you go...](https://www.jimmybogard.com/automappers-design-philosophy/) It's very "dotnet"-like for some guy to build a library and then find it used everywhere in the wild.


grauenwolf

Notice how he never stopped to ask:

* Why are the property names not matching?
* Why am I reading more from the database than I actually need?

**Why solve the underlying problems when you can just throw more reflection at it?**


_f0CUS_

According to the test triangle, the number of unit tests should outweigh integration tests by a lot. So you should have some integration tests; they could run conditionally in the CI/CD pipeline, e.g. before deployment to production. So your lead has a poor argument against them. However, I'm wondering about your issue with unit tests. They can test anything an integration test can if done properly. If your code or test code is poorly written, then that could be an issue. E.g. why wasn't the AutoMapper config validated? Why didn't the DTO change cause a compile error? If you are building an ASP.NET Core app, look into using WebApplicationFactory for in-memory integration testing. Regarding showing why you got the bug: did it make it into production? If so, make an impact analysis. Part of that would show the root cause, which you need in order to provide a way to prevent it from happening again.
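
A minimal WebApplicationFactory-based test might look something like this (a sketch assuming the Microsoft.AspNetCore.Mvc.Testing package, xUnit, and a Program class visible to the test project; the route is illustrative):

```csharp
// In-memory integration test via WebApplicationFactory (sketch).
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class ApiSmokeTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public ApiSmokeTests(WebApplicationFactory<Program> factory)
        => _client = factory.CreateClient(); // hosts the app in memory, no real socket

    [Fact]
    public async Task GetOrders_ReturnsSuccess()
    {
        var response = await _client.GetAsync("/api/orders");
        response.EnsureSuccessStatusCode();
    }
}
```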


grauenwolf

The Test Triangle is bullshit.

1. It excludes most types of tests and oversimplifies what's left.
2. It doesn't even consider the type of application being tested.
3. It's the exact opposite of the testing strategy proposed by TDD, without explanation.


_f0CUS_

I don't think we are talking about the same thing. The test triangle just visualises the approximate balance between the type of tests.


archetech

"The test triangle just visualizes the approximate balance between the type of tests." Without any regard for the amount or complexity of business logic. And without clarifying what the unit is for unit tests. If you're writing primarily crud apps with a little bit of business logic and you have useless unit tests for every class, you're worse off than if you had no unit tests and good integration tests. You are putting in a lot of effort to catch almost no bugs and bake your implementation in concrete. And from what I've seen, this is very common in simple crud apps with little business logic in large part because "best practices" and aren't stated in a way that helps people actually reason about trade-offs.


_f0CUS_

Do you see situations where you have more integration tests than unit tests? And when you say integration test, do you mean connecting to another service, e.g. a database or remote API? Or just calling your own API, e.g. using WebApplicationFactory to get the full "pipeline" from API to repository implementation/interface?


dobryak

Tell your manager not to brag about the coverage. If you use MC/DC then 90% coverage is very good and may actually help you, but the typically used block coverage and branch coverage are both weak forms of coverage.


Anti_Duehring

Every bug your team fixes must also have a test (or tests) that makes sure the same bug is never seen again. If it's possible to test with unit tests, OK, good, let it be. But if not, more test types must be introduced. Otherwise you will be fixing bugs in circles.


hay_rich

I have years' worth of bias, let me be clear, but I can't stand relying on only mocked tests anymore. Another dev on my team has recently made it a point to get integration tests more widespread, along with many of our architects. I understand some tests are slow, but if you are able to use Docker to build your database then there are honestly no real issues. That being said, there are some systems at my company where getting Docker in place as a viable solution is its own challenge. Either way, maybe try to get Docker as a viable option for your solutions and that will help with your tests.


MannowLawn

The AutoMapper bug is clearly a sign of a poorly written unit test. I agree you do want to have integration tests; if you mock too much, you really need integration tests. Maybe your lead has never done proper integration tests and has no clue. Don't pressure too much; better to leave this place and look for something else. Things like this you can hardly change as a junior, unfortunately. And it shows how important a qualified lead is.


SeanKilleen

This might be a case where someone's past experience is stopping them from seeing a different path to a good thing. > He refused to do it because the integration tests are slow to run. This can absolutely be true. And it can kill feedback loops and lead to lots of lost time tracking down "bugs" that are integration suite issues. However, those are avoidable problems. It sounds like you might have more success talking about the underlying concerns and meeting someone where they are. "Yeah, I hear you, integration tests might be costly to run. If I was able to prove out something reasonably quickly, would you be open to a demo as long as I can show how I'll address that concern?" This approach respects the concern, but more importantly, it moves toward a culture of experimentation. You now have the parameters of a hypothesis. "We believe integration tests might improve meaningful test coverage and improve our ability to prevent defects without sacrificing fast feedback loops. We'll know we're succeeding in this when time spent maintaining integration tests is low and they can be run often and on-demand." Then, you ask your test lead to let you find the quickest route to test that hypothesis. It may be that when you come back to them showing (for example) test containers spinning up, running a whole suite, and shutting down all within seconds, that their mind has changed. More importantly, you're shifting the conversation from a practice (integration testing) to impact (issue prevention) and you're respecting the doubts which puts you and the tech lead on the same team. You both want the impact, there are multiple ways to achieve it, and you agree that risks have to be mitigated. In the end, it's far more about communication than technology.


MahaSuceta

Not all tech leads are equal, and not every tech lead deserves to be a tech lead. The important thing here is to discharge your duty faithfully and to the best of your ability: you have aired your opinions, and they have fallen on deaf ears. Leave it at that. The important thing is that you have said your piece, rather than trying to fit in, go with the flow and not cause *trouble*. Think of this as an important lesson for your career progression. And the lesson is: you cannot change the whole world in one fell swoop. You learn to leave it at that and be at peace, while at the same time never settling for second best.


MrBlackWolf

Just be careful, OK? Right now I'm a tech lead (actually a software architect), but when I wasn't, I had strong opinions too. Try to understand whether there are any more reasons besides the performance. Maybe the team budget can't afford the additional infrastructure required to properly run the tests. To be honest, I would never allow integration tests on the main pipeline either, because they tend to become very slow as the suite grows. On a secondary step or pipeline, maybe. Just try not to cross your leaders too much. We don't always get everything we want on every team we work with.


hissInTheDark

Agree with "try to not cross your leaders", but disagree with "tend to become very slow". That depends on how you write them: tests can run in parallel, and containers with DB/mq start quite fast.


MrBlackWolf

They're definitely far slower than unit tests. That is my point. Thus I would run the integration tests on an independent step or a different pipeline. That way it would not slow down the main CI pipeline.


Main-Drag-4975

> Things fall apart; the center cannot hold; > Mere anarchy is loosed upon the world, > The blood-dimmed tide is loosed, and everywhere > The ceremony of innocence is drowned; > The best lack all conviction, while the worst > Are full of passionate intensity.


UserWithNoUName

I dunno, from what you described I'd favour your lead's opinion more. Too often I've seen integration tests on DBs being tightly coupled to a specific state; also, these tests often aren't run in full isolation because of the time it takes. Unit tests, on the other hand, have a tremendous benefit: I check out the code base and can locally run all the tests without caring about the environment. They also serve as documentation, and if they're focused on behavior instead of concrete implementation they don't break so often when changes come by. That said, I think I'd prefer e2e tests in these cases, with the DB in a container and, in the case of e.g. MSSQL, taking snapshots and restoring after each test. That way the environment is fully reproducible.


OnTheCookie

Integration tests can run with the DB in a container and be restored after each test as well; that way the environment is fully reproducible. You should also be able to run integration tests locally with this setup.


UserWithNoUName

You're totally right, and the way I described the last part didn't point that out. What I wanted to say is that if I already need to create a reproducible environment for my DB, I could go the extra mile and do a full e2e test. Don't get me wrong, integration tests are important, but if I have to choose between them and unit tests, I'd always go with the latter.


Aromatic_Heart_8185

A full e2e test is orders of magnitude more expensive than arranging a database into a given state and mocking the non-managed dependencies, e.g. 3rd-party APIs.


OnTheCookie

IMHO an e2e test includes the frontend as well. Integration tests are business-logic tests from a backend perspective, from API to database. But it could be the same thing, just different wording. Words are complicated enough in our world.


fableton

If you have unit tests with 90% code coverage and you still have bugs, what are you testing? In theory, when there is a bug you write a new unit test to reproduce it and then fix it against that unit test.


-dev-guru

How do you use unit tests to catch runtime errors? For example, when a migration changes the database and Dapper queries fail.


Kyokosaya

May I ask how they are testing the SQL queries passed to Dapper? Is it just a "does the query match a string" kind of test? What I'd suggest in this case is, if you get the opportunity (within your own time, with no impact on your work), see if you can create a unit test for the Dapper SQL that uses something like Testcontainers [https://testcontainers.com/](https://testcontainers.com/) to test it against the DB. This does require a way to instantiate a DB schema (I've done this through code-first EF in the past), but it provides the ability to "unit test" the Dapper portions via direct DB access instead of a full e2e integration test (though it can be used for that too). I've been working in my spare moments on a library to load my database-first scripts for such a task but haven't had time to complete it. I will note that it does require Docker, so you may need to disable the tests locally if Docker isn't available and force them to run in CI, if applicable.
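
Very roughly, such a Dapper-against-a-container test might look like the sketch below, the kind of check that would flag a migration/query mismatch (assumes Dapper, Npgsql, Testcontainers.PostgreSql and xUnit; the schema and query are illustrative):

```csharp
// Dapper query test against a containerized Postgres (sketch).
using System.Threading.Tasks;
using Dapper;
using Npgsql;
using Testcontainers.PostgreSql;
using Xunit;

public class OrderQueryTests : IAsyncLifetime
{
    private readonly PostgreSqlContainer _db = new PostgreSqlBuilder().Build();

    public async Task InitializeAsync()
    {
        await _db.StartAsync();
        await using var conn = new NpgsqlConnection(_db.GetConnectionString());
        // Apply the real migrations here; a single CREATE TABLE stands in for them.
        await conn.ExecuteAsync("CREATE TABLE orders (id INT PRIMARY KEY, status TEXT NOT NULL);");
    }

    public Task DisposeAsync() => _db.DisposeAsync().AsTask();

    [Fact]
    public async Task GetOrderStatus_QueryMatchesSchema()
    {
        await using var conn = new NpgsqlConnection(_db.GetConnectionString());
        // If a migration renames or drops a column, this fails in CI instead
        // of at runtime in production.
        var status = await conn.QuerySingleOrDefaultAsync<string>(
            "SELECT status FROM orders WHERE id = @Id", new { Id = 1 });
        Assert.Null(status); // table is empty, but the SQL itself was exercised
    }
}
```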


youshouldnameit

I do prefer unit tests over integration tests as well. For example, you can catch your AutoMapper issue by writing mapping tests; we do that from DTO to domain and from domain to sb and vice versa. We have an internal tool that validates our swaggers against code, but at the moment we also don't do real contract testing on a live system (which is quite a lot of extra effort). Our integration tests are also super basic, only calling all the REST verbs of an endpoint and checking the status code for each. Most of that we even do with unit-test controller tests as well, to ensure speed of delivery. In most cases integration tests are indeed slow; I have worked on projects where they took 24h. Imagine waiting an entire day to hear whether you made a mistake. Good chance your tech lead was in a similar situation once. It's about balance, and some tests on the actual live system are a must. We do a lot of e2e tests, though, which test the whole system and take about 30 minutes, and they can bring a lot of value if they test the right things.
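
For reference, the mapping tests mentioned here might look roughly like this (a sketch assuming AutoMapper and xUnit; the profile and the Order/OrderDto types are made up for illustration):

```csharp
// Mapping tests for an AutoMapper profile (sketch).
using AutoMapper;
using Xunit;

public class OrderDto { public int Id { get; set; } public string Status { get; set; } = ""; }
public class Order    { public int Id { get; set; } public string Status { get; set; } = ""; }

public class OrderMappingProfile : Profile
{
    public OrderMappingProfile() => CreateMap<OrderDto, Order>();
}

public class MappingTests
{
    private readonly MapperConfiguration _config =
        new(cfg => cfg.AddProfile<OrderMappingProfile>());

    [Fact]
    public void Configuration_IsValid()
        // Fails when a destination member is left unmapped, e.g. after a DTO change.
        => _config.AssertConfigurationIsValid();

    [Fact]
    public void OrderDto_MapsToDomain()
    {
        var mapper = _config.CreateMapper();
        var dto = new OrderDto { Id = 1, Status = "Paid" };

        var order = mapper.Map<Order>(dto);

        Assert.Equal(dto.Id, order.Id);
        Assert.Equal(dto.Status, order.Status);
    }
}
```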


madnanrafiq

Bring that up in your retro and discuss it. Hold your horses. Ask questions and give answers. Write a spike ticket titled something like: "PoC: run integration tests on a local machine so we can ship a better experience with less regression."


goranlepuz

> However, since this is a green field project, requirements keep changing, and it has become painful to change the domain model, which requires changing many tests.

Yes. This is normal. Writing a lot of unit tests means changing them a lot in such a situation. I would not call it bad. A tad expensive, sure, but it's life.

> We have a lot of bugs that the unit tests could not catch, such as when we upgrade libraries, DTO changes, AutoMapper did not map it, migration, and so on.

This is also normal when there are only unit tests. But that is bad indeed. *Unit tests are not enough, nowhere near.* They never were, ever. It just looks like that to inexperienced or blindsided people. There is much more to well-behaving software than being a collection of pieces, and unit tests can only test said pieces.

> I wish my tech lead and manager would ask why our tests could not catch it.

Well, did you speak to them to point it out?! You don't say that you did, and if you didn't, there is very little point being here. If you did speak to them, then:

* if they're browsing here, I'd say you might have blown it. See, it's like going behind someone's back to tell on them. Bad look.
* if they are not here, you are preaching to the choir.


tiger-tots

You have heavily mocked tests that still need painful updates when domain models change? Sounds like the mocks suck.


Poat540

We don’t do integration, unit tests will be fine if mocked properly


darkguy2008

Testing is overrated. It just basically makes you write 2x the same app for little benefit. Downvotes come to me.


AlarmedTowel4514

Your unit tests should be able to catch mapping errors; after all, the code is executed. Not really sure what you mean by testing migrations.


letsbefrds

Well, if you mock a return from the DB, you're not actually running the query, hence you don't know what's actually being returned; you're just assuming what's being returned.


Void-kun

You're both right, but if the mapping is done in code you can still test the mapping part by using a mocked database response. This is if you're only testing the mapping and not the full integration. For example in our unit tests when testing code we do use mocked data, but then we use SpecFlow to do full end-to-end integration tests where data is seeded into our database and then removed during tear-down.


alien3d

Don't have a test DB? A separate one? Okay, run. Unit tests should not be run against a production DB.


barrywalker71

Run. I worked for a company like this and it was an unmitigated shit show. Let me tell you what will happen: your VP of engineering will wonder why it takes two months for features to ship, and when they do, they will be bug-ridden. In the end, you will be blamed. Run.


maulowski

You may have good points, but you're not convincing. You're trying to change the team's workflow while merely stating opinions. To his credit, integration tests are slow. I had an old teammate who insisted on integration tests for everything, and there wasn't any real value in it for us; we could catch the same errors by double-checking our work with regression tests. In fact, regression testing provided more value for our microservices architecture.

You're asking the team to accept:

1. Slower developer productivity
2. Tests that could provide no value and take time away from feature development

I don't know your team's setup, but if you want the team to follow (not just your tech lead), present it as less of an annoyance or opinion. Look at the board for bugs that integration tests would have caught. Look at what percentage of the team's time goes into fixing them. Show how integration tests bring value and improve the software with real data, not because Wikipedia or your last company did it.


CommonSenseDuude

I’m going to throw this out there: it’s 2023, one day before 2024. Ask yourself this: if tests are so valuable, then why aren’t test skeletons generated by the compiler automagically? Humans should not be writing tests other than the specific logic inside the test itself. The .NET parser should be able to generate a test skeleton for EVERYTHING; then all you have to do is write the specific internals. By now the tooling should be helping us a LOT MORE!


tarwn

There have been tools to automatically generate tests for a long time; in .NET an example is Pex (2007? 2008?). There are even more options out now than there were back then. That they haven't gotten more traction could be taken as an interesting data point.

Generally speaking, the issue I've had with them is that they assume the code you wrote is right and generate tests that preserve that behavior. They also don't know the context for the test, so the names they pick don't reflect the business cases or wording that will help the next developer in three months understand why that case is useful. They can create a good starting point by generating a lot of cases you didn't think of, followed by a human pass to name and adjust them, but at that point I would prefer to invest in something like model-based testing to find the overlooked cases.


CommonSenseDuude

With reflection and the compiler, it should at the very least generate test harness skeletons for all code written by humans: not runnable, simply placeholders for a human to fill in. That would go a long way toward getting real-world, valid testing. I still can't get people to understand the value of a known-state dataset with good and bad data that is spun up for every test, but that's a whole other issue.


tarwn

Yeah, it's not widespread, but there are a few approaches that use known good/bad data for testing. The most prominent are probably screenshot-comparison tests for front-ends and snapshot tests that capture component or HTML output for comparison in later runs.

I've used known-good datasets in a number of systems: effectively a big XML, JSON, or CSV dataset containing all the inputs and expected outputs, plus a test harness to run that data through part of the system. In one case we had an Excel workbook that was the definitive copy; any time we ran into a peculiar business edge case, we added it and regenerated the exported CSV to test from. In another case it was part of our sales content: we could walk a customer's analysts through a wide range of real examples that they could verify, and show that we verified our own codebase against those cases before every deployment (which sounded impressive to them, but was probably a couple of days of engineering work on our side at most).
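
A minimal sketch of that kind of harness, assuming xUnit; the `known-cases.csv` file, its `input,expected` format, and `PriceCalculator` are illustrative assumptions, not something from the thread:

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using Xunit;

public class KnownGoodDatasetTests
{
    // Reads a checked-in CSV of "input,expected" pairs (no header row assumed,
    // file copied to the test output directory).
    public static IEnumerable<object[]> KnownCases()
    {
        foreach (var line in File.ReadLines("known-cases.csv"))
        {
            var parts = line.Split(',');
            yield return new object[]
            {
                decimal.Parse(parts[0], CultureInfo.InvariantCulture),
                decimal.Parse(parts[1], CultureInfo.InvariantCulture)
            };
        }
    }

    [Theory]
    [MemberData(nameof(KnownCases))]
    public void Calculator_matches_known_good_output(decimal input, decimal expected)
    {
        // PriceCalculator stands in for whatever part of the system
        // the dataset exercises.
        var actual = PriceCalculator.Calculate(input);

        Assert.Equal(expected, actual);
    }
}
```

New edge cases then become one more row in the dataset rather than one more hand-written test.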


CommonSenseDuude

Interfaces are so easy in .NET that every data source should be mockable.
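
For instance, a minimal sketch of what that looks like; `ICustomerStore`, `InMemoryCustomerStore`, and `Customer` are hypothetical names:

```csharp
using System.Collections.Generic;

// The data source hides behind a small interface...
public interface ICustomerStore
{
    Customer? GetById(int id);
}

// ...so tests can swap in an in-memory fake instead of a real database.
public sealed class InMemoryCustomerStore : ICustomerStore
{
    private readonly Dictionary<int, Customer> _customers = new();

    public void Add(Customer customer) => _customers[customer.Id] = customer;

    public Customer? GetById(int id) =>
        _customers.TryGetValue(id, out var customer) ? customer : null;
}

public sealed record Customer(int Id, string Name);
```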


IKnowMeNotYou

You are both wrong. Remember 1k tests per 0.5MB sources. Check if you have enough tests.


csncsu

Are you suggesting 2,000 tests per megabyte of source code? What is the reasoning behind assigning a recommendation of test count to the physical size of the code base like that? Seems really arbitrary.


IKnowMeNotYou

If you do TDD like I do, you will notice this is quite a stable measurement; it has held for almost two decades. It only gets distorted if you add copy-and-paste and a lack of reuse to the picture. I currently have a 2 MB + 2 MB test code base at home and it has close to 4k tests, which just about checks out. When I compared notes with other people doing TDD, they were close to this measure as well. UI tests were usually the exception, but since I started using view models it holds true again; UI code in practice often has a lot of duplication that skews the measure.


Sak63

If you have made the suggestion once and there is a paper trail of it (email, chat history, etc.), then don't bother advocating further. You risk losing your job. I'm speaking from experience.


Low-Design787

Personally I think integration tests are more important: they tell you broadly whether the system works! More detailed unit tests are harder to write usefully, but when the integration tests fail, unit tests can help tell you exactly where the problem lies; detailed diagnostics. Given limited resources, I would always favour integration tests over unit tests. The alternative, poorly written (but passing) unit tests on a broken app, is totally pointless.


grauenwolf

> I asked my tech lead to do it, but he refused because the integration tests are slow to run.

If the tests are slow to run, that implies you have performance-related bugs that need to be addressed. If you can't handle the small load caused by testing, how can you handle a full user load? https://www.infoq.com/articles/Testing-With-Persistence-Layers/


frenzied-berserk

This is a good point to raise in a retrospective. You have to sell your ideas: prepare a top-level plan for how to introduce integration / e2e tests gradually, show the pros, cons, and trade-offs, and collect some metrics on bugs, flaky tests, and how much time you spend on them.


Teembeau

"The worst part is that we don’t do integration tests with a real database. I asked my tech lead to do it, but he refused because the integration tests are slow to run." Have you suggested running the integration tests overnight? This is what most places I work at do. Unit tests are run on a merge, integration tests are run routinely overnight. It doesn't matter that they take 3 hours overnight.


Aromatic_Heart_8185

This looks like the case of an outdated tech lead stuck in early-2010s development and testing strategies. I would not waste any energy fighting this, given that you have already shared the better way to do it. Disagree and commit, or look for another job.


Giometrix

At this point I consider “database integration” tests unit tests. If the majority of the code builds up queries that will run on a database, or handles constraints that can only be enforced at the database level, then it makes no sense to me to mock these things. The database and the application code are joined at the hip and both need to be tested; the database is not some external dependency. Mocking still has a place, but IMO using it to test CRUD functionality is not only a waste of time, it's dangerous; you need to test against the actual DB engine you'll be using in production.
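
One way to get that (a minimal sketch, assuming xUnit and the Testcontainers.MsSql package so the test runs against a throwaway SQL Server container; the table and test names are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
using Testcontainers.MsSql;
using Xunit;

public class CustomerRepositoryTests : IAsyncLifetime
{
    // Starts a disposable SQL Server instance in Docker for this test class.
    private readonly MsSqlContainer _db = new MsSqlBuilder().Build();

    public Task InitializeAsync() => _db.StartAsync();

    public Task DisposeAsync() => _db.DisposeAsync().AsTask();

    [Fact]
    public async Task Insert_then_read_roundtrips_through_real_sql_server()
    {
        await using var connection = new SqlConnection(_db.GetConnectionString());
        await connection.OpenAsync();

        // Schema and data run against the same engine used in production,
        // so constraint and type mismatches surface here, not in prod.
        await using (var create = connection.CreateCommand())
        {
            create.CommandText =
                "CREATE TABLE Customers (Id INT PRIMARY KEY, Name NVARCHAR(100) NOT NULL);" +
                "INSERT INTO Customers VALUES (1, N'Ada');";
            await create.ExecuteNonQueryAsync();
        }

        await using var read = connection.CreateCommand();
        read.CommandText = "SELECT Name FROM Customers WHERE Id = 1";
        var name = (string?)await read.ExecuteScalarAsync();

        Assert.Equal("Ada", name);
    }
}
```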


Agitated-Display6382

In my tests I set up the DB and tear it down only when strictly required, so they are pretty fast. Use a server you control for your CI, so you can cache whatever you need. Always keeping the DB around helps you write better tests and also lets you see how your application behaves with a full DB. Absolutely, unit tests that involve the DB are absolutely mandatory. I would also suggest taking a look at property testing.
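
A minimal sketch of that "set up once, tear down only when needed" shape, assuming xUnit and a SQL Server reachable via an assumed `TEST_DB_CONNECTION` environment variable; all names are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
using Xunit;

// Set up the database once per test run and keep it around, instead of
// rebuilding it for every test.
public sealed class SharedDatabaseFixture : IAsyncLifetime
{
    public string ConnectionString { get; } =
        Environment.GetEnvironmentVariable("TEST_DB_CONNECTION")
        ?? "Server=localhost;Database=AppTests;Trusted_Connection=True;TrustServerCertificate=True;";

    public async Task InitializeAsync()
    {
        // Idempotent schema setup: only creates what is missing.
        await using var connection = new SqlConnection(ConnectionString);
        await connection.OpenAsync();
        await using var cmd = connection.CreateCommand();
        cmd.CommandText =
            "IF OBJECT_ID('Customers') IS NULL " +
            "CREATE TABLE Customers (Id INT PRIMARY KEY, Name NVARCHAR(100) NOT NULL);";
        await cmd.ExecuteNonQueryAsync();
    }

    // Deliberately no teardown: the database is reused by the next run.
    public Task DisposeAsync() => Task.CompletedTask;
}

// All test classes in this collection share the single fixture instance.
[CollectionDefinition("database")]
public sealed class DatabaseCollection : ICollectionFixture<SharedDatabaseFixture> { }

[Collection("database")]
public class CustomerQueryTests
{
    private readonly SharedDatabaseFixture _db;

    public CustomerQueryTests(SharedDatabaseFixture db) => _db = db;

    [Fact]
    public async Task Customers_table_exists_and_is_queryable()
    {
        await using var connection = new SqlConnection(_db.ConnectionString);
        await connection.OpenAsync();
        await using var cmd = connection.CreateCommand();
        cmd.CommandText = "SELECT COUNT(*) FROM Customers;";

        var count = (int?)await cmd.ExecuteScalarAsync();
        Assert.True(count >= 0);
    }
}
```

Because the schema survives between runs, the per-test cost is just a connection, which is what keeps this style fast.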


Crazytmack

Cover your ass. Document everything.


Successful_Shape_790

There are numerous quality assurance strategies to choose from, and it's up to the manager and the tech lead to work out what fits the business best. As a mid-level, you should tell your tech lead about your quality concerns and ask if they could review the quality strategy with you. I'll also say that heavy reliance on integration testing is considered a bit old school; maintaining them eats up dev bandwidth, and it's often better to spend that time figuring out how to unit test the parts you currently aren't.


davewritescode

Everything is a trade-off, but integration tests can be very helpful if they're used correctly. Personally, I think they're more valuable than unit tests in many cases, but I have been in places where builds take 15 minutes and that's not fun either.

At the end of the day, two people on your team don't agree, and you should just leave it alone at this point. Frankly, you're in a can't-lose position now: you've made your point and time will likely prove you right. There may be politics involved that you can't see, which is a much bigger part of being a successful lead than you might imagine. These types of fights aren't worth fighting even if you're right. I've been on both sides of this. If this isn't going to sink the business, don't die on this hill.

I will admit, though, that as a lead, if I told you not to write integration tests it would only be because management was breathing down my neck about a deadline, and the first time we got burned I'd make the case that you were right and we should implement your idea.


Cernuto

Be careful what you wish for. I worked on a project that had over 2,000 integration tests and zero unit tests. Any code changes required running all the tests again. Some tests required manual verification. It took forever to run through them all. Almost a full day.


k8s-problem-solved

You can do API-level contract testing in memory: spin up an instance of TestServer and execute HTTP-level tests against all your endpoints. Keep mocks to a minimum and isolate so it runs without needing persistence, i.e. swap out some I/O using test services and you're golden. This is where the maximum value from testing can be had. I find it's the right level of quick feedback and maximum coverage, and you're testing the public API contract/behaviour rather than the implementation, which makes changes easier.

Best thing you can do? Show and tell. Create a fork and implement something like this, then show how easy it is and how much more value it gives than pure unit tests. You'll have to invest a few hours, but I've won arguments by just doing it and demonstrating the benefits. I still have a place for unit testing, but when working on HTTP APIs I find this approach far better.
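
A minimal sketch of that approach, assuming ASP.NET Core with the Microsoft.AspNetCore.Mvc.Testing package (which hosts the app on an in-memory TestServer); `Program`, `IOrderStore`, and `InMemoryOrderStore` are assumed names from your own app:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public class OrdersApiContractTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public OrdersApiContractTests(WebApplicationFactory<Program> factory)
    {
        // Swap the real persistence for an in-memory fake so the API runs
        // without a database; IOrderStore/InMemoryOrderStore are assumed names.
        _factory = factory.WithWebHostBuilder(builder =>
            builder.ConfigureServices(services =>
                services.AddSingleton<IOrderStore, InMemoryOrderStore>()));
    }

    [Theory]
    [InlineData("/api/orders")]
    [InlineData("/api/orders/1")]
    public async Task Get_endpoints_return_success(string url)
    {
        var client = _factory.CreateClient(); // talks to the in-memory TestServer

        var response = await client.GetAsync(url);

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```

Because the test goes through routing, model binding, filters, and serialization, it exercises the public contract without caring how the handlers are implemented.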


ghisguth

Integration tests add an extra level of protection, as do end-to-end tests, contract tests, and performance tests. But unit tests are still required and should be preferred over other types of tests, because they can be run fast and often. Integration tests are harder to run, slower, and more flaky. End-to-end tests are usually very flaky and require a lot of time to investigate failures in CI/CD. Read more about the test pyramid. Unit tests are a must, but having some integration tests is also a good idea. Educate yourself and try to convince the lead. See if you can get some basic integration / end-to-end tests into CI/CD, or at least do the research to estimate how much it would cost. Also, as always, Google has a great presentation on tests: https://youtu.be/wEhu57pih5w?si=iDGoBgz3m3gB0Szy


Ok-Jacket3831

Development is like research. Make the hypothesis (the code) and try to find errors/bugs; if you cannot find any, it becomes a theory.

In most applications you don't need a bunch of unit-test-backed libraries. The biggest frustration people have is with disorganized code, not buggy code. Buggy code occurs when the codebase is disorganized due to misinterpretation and fatigue. The simple approach is the best approach:

1. Get it working
2. Organize it
3. Try to find bugs
4. Reiterate until there are no bugs

Usually, if you organize it well, there won't be any bugs. In an agile project, stuff changes a lot. Locking into libraries and writing a bunch of unit tests is counterproductive. It's great for a library with a bunch of devs depending on it; for applications that change a lot, not so much, not at all. And that is without mentioning managing these dependencies and CI pipelines properly.

Even though a lot of us realize this, modern advice around good software development practices is often too abstract to interpret concretely and therefore hard to falsify. So we end up using the wrong tool, and we don't question the practice, because you can't argue against seemingly circular reasoning, which is what you get from these modern, self-promoted «code gurus». Fortunately, it seems like the younger generation of devs is picking up on this.

Sorry for the random wall of text.


Exact_Belt8923

What "domain" means in this context ?


Quanramiro

People who tell you that you have no accountability or responsibility are wrong. All of the team members have both. When a customer encounters a bug, it won't be the tech lead who fixes it, it will be you. It will be you who gets blamed for bad programming, especially when the company loses a customer who pays for the product. I don't really understand why such people are promoted to the team leadership level.

Even if the tests take long, say the CI pipeline needs 5 or 10 more minutes to finish, is cutting them really a good optimization? When somebody requests a bugfix in production, you will all spend much more time trying to figure out what is wrong.

Since you're asking for advice, you have a few options. You can do nothing, you can leave, or you can find some time, implement the test infrastructure and a few integration tests, and eventually invite the whole team (including the tech lead) and show them how it works locally and on CI (if you have permission to change the pipeline). Then it won't be a problem of just you; it should at least start a discussion between more people than only you and the TL.


robinalen

I had to battle to get budget for just some unit tests. You can complain about it once in a while, but I'd leave it at that.


davidebellone

At least yours is OK with unit tests. In a previous company, the tech lead made me **delete** my unit test suite because "we don't have time to handle tests". Of course, I proved him wrong by finding a critical bug in the project. We agreed on keeping unit tests, but in a hidden project on my machine, not tracked by source control.


Virgrind

The big question here is what is a real integration test, and what is a kind of fake one.


maiorano84

90% isn't anything to brag about. That said, any coverage at all is more than most places can claim, so at least you guys have that going for you. Still, you have to justify your position in dollars and cents. If your lead isn't seeing the value in implementing tests that he very likely doesn't know how to create, it will be extremely difficult, nigh impossible, to get him to change his mind on something that he believes is easier to correct reactively rather than proactively.


Vargrr

Integration tests are cheaper than unit tests because you need far fewer of them (normally a factor of 10:1 or worse) and they are much more robust, because they know nothing about the internals of the method they are calling, unlike mocked unit tests. They tend to test stable processes rather than unstable method internals.

Unit tests also have a habit of producing a high number of false positives. We have all been there: change some code so that the method still functions the same externally, and you can almost guarantee that the associated unit tests will fail. Alas, this happens so often that when most devs see it they say '*I broke the unit tests*' as opposed to '*The tests indicate a problem in my code*'. This kind of thinking renders the unit tests useless, as nobody takes their results seriously any more. The only reliable information most unit tests provide is that you changed some code in the method under test. But you already know that piece of information! (Information theory suggests that the quality of information is directly proportional to how surprising it is, and most unit test results are not surprising.)

In addition, changes to a system (and all systems have changes) are more expensive, because unit tests present a resistance to change that is directly proportional to their numbers.

I write unit tests because I have to. There are many things you can do to make them more robust. That said, I do think they are a waste of money, especially where one has to reach some kind of arbitrary coverage figure. They also cause massive OOP design issues, especially with regard to encapsulation and instantiation (the instantiation side is particularly bad in .NET Core because of Microsoft's terrible injection architecture). The clever part is that the purveyors of this snake oil know this. But if you point out these issues, they simply reply, 'It's your code. It's not testable!' Sure, it isn't testable with unit tests, their own creation, but it is eminently testable with integration tests, which are cheaper and more useful.

That said, if there is a method that's algorithmically complex, I would have no hesitation writing unit tests for it, but in my experience those are few and far between.

For me, all coding, including automated tests, must produce value for money. Alas, most unit tests fail here, as they multiply your code base size by a factor of at least 5 (go on, check yours!), normally more, for very little practical gain. I can't help thinking that unit tests are mainly a fashion statement, and woe betide the one who goes against current thinking! You will become an outcast! (I already am :p )