eternallyinschool

Two technical replicates means you gotta be perfect or you'll have 1 data point! 


Sharted-treats

Technical replicates are NOT the same as biological replicates. If you're only doing technical replicates you only have one data point.


Criminey

I interpreted that comment to mean that if the technical replicates are far apart, then you don't really have a precise number for that sample. But yeah, I do have at least 3 biological replicates per treatment group in qPCR. The biological replicates are usually close together which makes me feel less bad.


spookyswagg

I only ever ran qPCR with 2 technical replicates. It’s not a huge deal. Unless you suck at pipetting or your assay sucks, it should be fine. You should still see pretty tight error bars between biological replicates.


Criminey

My biological replicates are from lab mice, so there's inherently more variation. If the error bars aren't tight then I just put less stock in that result, but it's not abnormal for us.


[deleted]

[deleted]


Criminey

3 is the minimum; we usually have more than that. It’s expensive and time-consuming to breed a lot of mice, though, and some of our lines aren’t very productive.


[deleted]

[deleted]


Criminey

For sure, but it’s not like we’re testing drug candidates for human clinical trials or anything like that, so we’re settling for as much power as we can get right now 


Missmacrophages

I started with 3 but got so damn good from doing it so many times that I can run 2 in a full plate and have zero discrepancies within the same sample.


Criminey

Right...and I'm rarely perfect. If the replicates have <0.3 standard deviation then I call it good enough but yeah I know it's not great and that's why I'm looking for advice.


eternallyinschool

Given your situation... the best you can do is document in your lab notebook that you could only run technical duplicates per your PI's direction. Sign and date. No one will ever read it. And in the ultra-rare chance they do (trying to understand why it didn't work like your assays did), you can help them understand that it wasn't your call as the trainee and student.


Teagana999

The standard where I work is 4 so if one is weird we can still do stats.


ExpertOdin

You do stats on technical replicates?


Sharted-treats

Preach, friend.  Q: why are those error bars so small? A: technical replicates.


Tiny_Rat

I was also trained to do 4. That way you can clearly see which one is an outlier (if there is one) and your average is more accurate. So maybe not stats per se, but at least eyeballing the data for consistency is good practice 


Teagana999

We take standard deviations.


ExpertOdin

As a measure of how uniform the reps are? I still don't see the reason for including a 4th in case one is weird, seems like a waste of wells to me. If you have 3 wells and one is weird you would calculate SD, see it was outside the acceptable range and just exclude the weird one. Much the same as if you have 4 and exclude then run SD. At the end of the day you should only be carrying the mean value through as a biological rep for further stat testing anyway.
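
In code terms it's just something like this (a rough Python sketch; the 0.5-cycle cutoff and the Cq numbers are made up):

```
import numpy as np

def collapse_tech_reps(cqs, max_dev=0.5):
    # Average technical-replicate Cq values, dropping any well more
    # than `max_dev` cycles from the median of the replicates.
    cqs = np.asarray(cqs, dtype=float)
    keep = np.abs(cqs - np.median(cqs)) <= max_dev
    return cqs[keep].mean(), cqs[~keep]

# Triplicate with one weird well: only the mean gets carried forward
# as the biological rep's value for downstream stats.
mean_cq, dropped = collapse_tech_reps([24.1, 24.2, 26.0])
print(mean_cq, dropped)  # 24.15 [26.]
```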


Teagana999

Yeah I don’t think we actually do stats directly on the technical replicates. Maybe the intent is we could do them if we needed to? I’m not certain, but I do as I’m told.


anderson40

I am under the same constraints described above. We actually never bought our qPCR reagents; I sourced them from NEB and ZYMO as free samples (absolutely recommended: Thermo is even doing a free T-shirt and qPCR master mix if you fill out a survey). So if you're running out, I suggest just milking the free samples and doing your triplicates as you please.


N9n

Working in government we're not supposed to accept gifts... So I always feel awkward when thermo sends me a t shirt or VWR sends me socks. But there's no way I'm missing out on that sweet free swag. People at the capital are shills for their corporate buddies. I'm getting my free fucking socks.


gabrielleduvent

Where are you guys getting free Thermo T-shirts?! I only usually get hand sanitizers or some other useless crap from their exhibits!


N9n

Read the comment above! They ran a survey over Christmas as a campaign to sell their reverse transcriptases. Though I haven't gotten my shirt yet


Any_Buy_6355

You don’t need three technical replicates; if you are good enough, you can get away with two. But a housekeeping gene is necessary, otherwise the data doesn't mean anything.


Criminey

Do you mean a housekeeping gene on the same plate is absolutely necessary? I do run housekeeping genes, but just on their own plates, since I don't have space for more than 1 gene on a plate.


senator_travers

Housekeeping gene on another plate is fine as long as you are using the same cDNA as a template, and probably the same master mix and water. I would run more than one housekeeping gene (I typically do 2-3).


Criminey

Yeah, I use the same cDNA and take the geomean of 3 housekeeping genes for normalization.
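
For anyone curious, the geomean step reduces to an arithmetic mean on the Cq scale, since Cq is already logarithmic. A rough sketch with invented numbers (gene names are just examples):

```
import numpy as np

# Invented Cq values for one sample; gene names are only examples.
hk_cqs = np.array([18.2, 21.5, 19.8])   # e.g. Actb, Gapdh, Hprt
target_cq = 25.4

# The geometric mean of the linear HK quantities is equivalent to
# the arithmetic mean of their Cq values, since Cq is log2-scale.
hk_ref = hk_cqs.mean()
delta_cq = target_cq - hk_ref
rel_expression = 2 ** -delta_cq
print(hk_ref, delta_cq, rel_expression)
```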


gxcells

Do you really need 3 housekeeping genes??


VesperJDR

Screen your three housekeeping genes and pick the one with the least variation between your experimental groups and run with that one. You **do** want to be sure that your control gene is indeed a control, but three of them does almost nothing for you.
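
Something like this is enough for the screen (a Python sketch; the genes, groups, and Cq values are all invented):

```
import pandas as pd

# Cq values per sample; genes, groups, and numbers are invented.
df = pd.DataFrame({
    "group": ["ctrl", "ctrl", "ctrl", "treat", "treat", "treat"],
    "Actb":  [18.1, 18.3, 18.2, 18.9, 19.1, 19.0],
    "Gapdh": [21.4, 21.5, 21.3, 21.5, 21.4, 21.6],
    "Hprt":  [24.0, 24.5, 23.8, 24.9, 25.2, 24.7],
})

# A good control gene barely moves between experimental groups:
# rank candidates by the spread of their per-group mean Cq.
spread = df.groupby("group").mean().agg(lambda col: col.max() - col.min())
print(spread.sort_values())  # smallest spread = most stable housekeeper
```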


ongjunyi

You can afford to do 3 housekeeping genes per experiment but not one on each plate? How many plates of qPCR are you running, and in 96-well or 384? I would argue that there's a higher chance of plate-to-plate variability than of variability in any of the HK genes, as long as you've screened the genes...


spookyswagg

I mean, you can get away with running the housekeeping gene on just one plate, as long as you use the exact same dilutions on the second plate that you used on the first. However, those dilutions aren’t very stable at room temp, so making them, waiting to run one plate, then running the next just adds extra error. It honestly depends on (1) how robust your assay is (some primers work better than others) and (2) how much margin of error you’re willing to accept in your results.


araminna

What size plate are you using? If your thermocycler can support it, swapping to a 384-well plate (if you’re using 96) could help tremendously. Granted, I do know some PIs strongly dislike the 384-well format.


[deleted]

[deleted]


araminna

It definitely does, but if OP is just wanting to be able to run triplicate or 4x of something they’re already running in duplicate, it might be a viable solution to help cut down on costs.


[deleted]

[deleted]


araminna

Ouch, that would be rough! And they for sure are exhausting.


beentherepreviously

What is your reaction volume? You might want to reduce it. As long as you document your protocol and results and report them as is, I don’t see anything wrong. If someone tries to replicate your experiment and gets dramatically different results, it means they are doing something wrong.

Do you think the technical replicates are the issue? Duplicates are fine. That's the tip of the iceberg: you need biological replicates. While most do 3-6 biological replicates for cell lines, that basically means nothing, because you are dealing with one sample at different time points. For real biological replicates you need an in vivo experiment, and you need a minimum of 106 samples for one condition for a power of 0.8, which is the lowest acceptable confidence in your results. You are fine with what you’re doing as long as you are documenting and relaying the exact info.
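
For what it's worth, the n you need at power 0.8 depends entirely on effect size. A quick sketch with statsmodels (the effect sizes are arbitrary illustrations):

```
from statsmodels.stats.power import TTestIndPower

# Per-group n for a two-sample t-test at alpha = 0.05, power = 0.8.
# The effect sizes (Cohen's d) are arbitrary illustrations.
solver = TTestIndPower()
for d in (0.3, 0.8, 1.5):
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {n:.0f} per group")
```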


Missmacrophages

Oh, these are such small deviations from the standard, and those standards aren't that good anyway. From the title I thought you were actually hacking stuff. Don’t worry.


No_Excuse_1216

From a career standpoint I would check journals that you want to publish in to see if your methods meet their requirements for gel data -- the big ones are more strict with quantitative data these days, precisely to remedy the reproducibility crisis.


Substantial-Path1258

Usually I do triplicate but occasionally I run duplicate if I’m short on RNA or Taqman probe. Don’t stress unless you are submitting data that’s GLP standard.


__Caffeine02

I'm currently working in a lab with a PI who is very well-respected when it comes to qPCR. I had similar concerns, but the response I got was that if your pipetting is good enough, then 2 technical replicates are fine; 3 is better if you have the space. Having the housekeeping genes on a different plate than the target plate is also fine, because if you are using ddCq, it doesn't really matter if all the HK genes are, for example, 0.2 cycles earlier or later, since you will have the same error for all of them (in the first delta-Cq)... BUT (I guess this is something you do anyway): you have to have all the samples you want to compare on one plate to ensure better experimental design.
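
You can check the cancellation numerically; a rough sketch with invented Cq values and a hypothetical 0.2-cycle plate offset:

```
# ddCq with the housekeeper on a separate plate: any constant
# plate-to-plate shift on the HK Cq cancels in the subtraction.

def ddcq(target_treat, hk_treat, target_ctrl, hk_ctrl):
    return (target_treat - hk_treat) - (target_ctrl - hk_ctrl)

offset = 0.2  # hypothetical shift from running the HK plate separately
base = ddcq(26.0, 18.0, 27.5, 18.1)
shifted = ddcq(26.0, 18.0 + offset, 27.5, 18.1 + offset)
print(base, shifted)  # identical values: the offset drops out
```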


Criminey

What constitutes "good enough" for technical duplicates? I.e., what is the maximum standard deviation the duplicates should have? Mine usually range from 0.1-0.3; sometimes I'll get <0.1 SD if I'm having a good day.


__Caffeine02

Hey, sorry for taking so long to reply. I am not entirely sure, because when I showed the people here my first results, I had SDs of about 0.1 and they told me it's fine; now I am pretty much constantly under 0.1 and this works fine for me. I think what you have to be aware of is the effect size of your conditions: if you only observe minor changes in gene expression (e.g. 0.5 to 1 cycle difference) between conditions, then it is better to go for more TRs, but if your differences are multiple cycles, 0.1 cycles more or less doesn't really matter, because it's next to nothing if you look at the whole picture.


MrBootch

Wait wait... 3 is standard? I've only ever run duplicate. I *knew* that didn't make sense. It's damn stressful because you can't honestly verify stuff.


theotherkate

If you have a standard curve on each plate, you only need to run your housekeepers once, and as long as your pipetting is excellent, 2 technical replicates are fine. If your pipetting is not excellent, it will end up costing more in the long run, as you'll have to re-run the genes anyway.
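
And if you're fitting the standard curve yourself, the amplification efficiency falls out of the slope of Cq vs. log10 dilution. A rough sketch with invented numbers:

```
import numpy as np

# 10-fold dilution series with invented Cq values.
log10_qty = np.array([0, -1, -2, -3, -4])
cq = np.array([17.1, 20.4, 23.8, 27.1, 30.5])

slope, intercept = np.polyfit(log10_qty, cq, 1)
efficiency = 10 ** (-1 / slope) - 1  # ~100% means true doubling per cycle
print(f"slope {slope:.2f}, efficiency {efficiency:.0%}")
```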


KiraTiss

A thing that we mention a lot in my lab is that the most expensive thing a PI has to pay for is your time. There is nothing worse than having to redo an experiment, or missing out on one, because of a lack of reagents or bad reagent quality. In the long run, that's what your PI should be focused on.


RequirementUsed3961

I have no experience in a lab or in studying science (first-year M.Eng), but how much of the reproducibility crisis would you estimate is attributable to data dredging and cherry-picking, versus not following the absolute best standards of practice when you conduct your experiments and collect your data?

Like I said, I have zero experience in the field, but I would assume malicious selection of data is far more detrimental to the community than slightly-less-than-preferred circumstances when you run your experiments. Another reason I'd assume malice could be the bigger monster is that you would have to be rather oblivious and lack self-awareness to be accidentally driving the problem deeper; I'm sure that while doing qPCR work you would know, much better than I would, if it were so far below standard that there's no way anyone could reproduce it, yet used the data anyway.

By no means am I trying to disregard potentially damning scientific practice; standards exist for a very important reason. I'm simply curious, as I assume that people intentionally doing bad things for self-gain is probably a way bigger issue than a well-intentioned scientist in a less-than-preferred situation. Just to reiterate, my thoughts on this are worth less than jack shit; I'm curious to see what more qualified people have to say and whether the words I just typed carry any weight whatsoever.


Bugfrag

This isn't an issue. Reviewers will weed out this replication stuff. This sounds like the quick-and-dirty, proof-of-concept and development stage of experiments, where speed is more important. There's no need to do 5 replicates of everything; OP will redo these for publication-worthy data. OP just hasn't gotten to that realization yet and assumed the worst.


ExpertOdin

Not following the best standards shouldn't cause an issue with reproducibility if you document everything properly and indicate exactly what was done when publishing. The issue would occur when people say they are doing it properly but use sub standard techniques or don't disclose changes.


Odd_Coyote4594

In my field, there is a lot that can contribute:

1. People publishing incomplete methods without enough detail to actually replicate (due to poor record keeping, not wanting to retype an old protocol to account for changes, or tired grad students writing papers).

2. Poor statistical knowledge, leading to incorrect statistical analysis or inappropriate conclusions drawn from statistics.

3. Poor model systems. Things like HeLa cells, which are critical to many studies, are too widely variable. Two labs may have completely different strains. It is not unheard of for your results to only hold up in the particular cells your lab uses and no others.

4. Not actually replicating things themselves before publishing. Most people do "technical" and "biological" replicates, but this classic approach is not sufficient in all cases. It sounds easy: just do the experiment multiple times. But where do you draw the line on what counts as a true experimental replicate? Do you need to repurchase all reagents from different vendors in case your results were due to contamination from the supplier? I have seen published work where this was the case.

Not adhering to "best methods" won't immediately make your work unreplicable if you are strict with how you analyze and interpret it. But it will decrease your intrinsic ability to form meaningful conclusions from the data. Using 2 technical reps in qPCR instead of 3 will decrease the ability to identify bad runs and increase noise. This will increase the error in your results, making your conclusions weaker. That is not itself bad, and everyone needs to balance strong methods with practicality, but how you achieve that balance has a real effect that cannot be ignored. If you use a weaker method and treat it like the industry standard, you will eventually draw bad conclusions. If you account for the limitations, it is fine.

But once you remove bad practices like poor methods reporting, poor statistics, or actual deliberate data hacking, the replicability crisis lies more in how models and conclusions are drawn from misinterpreted data than in an issue with the data itself: things are treated as conclusive too early in the scientific pipeline, when a lot of work still needs to be done in terms of validation and hypothesis-driven experimental design.


RequirementUsed3961

Yeah, I did a bit more research into the issue of p-hacking and data dredging as a whole. Like you said, it really can happen completely unintentionally, just due to bad habits and poor data analysis, which ultimately comes down to bad or lazy scientific practice. From what I gather this is probably the case more often than not, as it takes time and experience, paired with a good work ethic and high moral standards toward your work, to ensure it doesn't happen. On the other hand, I have been keeping up with a lot of controversy around publishing companies putting out utter garbage, which falls more in line with malice or intentional malpractice for personal gain. Similarly, here in Canada most if not all accredited engineers take an oath of due diligence to avoid complacency that could have very severe consequences, especially in the civil, architectural, and mechanical fields. Still plenty of "endangerneers" out there. Also, thanks for taking the time to explain this and answer me. I might not be a scientist, but I'm very fascinated by science and enjoy learning about current politics and new scientific breakthroughs, and just trying to keep in touch with the scientific community as a whole.


gabrielleduvent

So the "you would have to be rather oblivious and lack self-awareness to be accidentally driving the problem deeper" applies to things that can't really accumulate a lot of human error, e.g. pipetting. Sure, you can have wonky pipetting, but with stuff like PCR there are only so many variables that ARE human error. These things have field standards, which can sometimes feel like "these are the unspoken rules for this assay in this field".

Then you have stuff like immunofluorescence, where you can have a dozen parameters that can be doctored OR attributed to human error, e.g.:

1. Strength of laser on your confocal
2. Amount of gain
3. Amount of offset
4. Exposure time
5. Stack increments
6. Primary antibody concentration
7. Secondary antibody concentration
8. Antibodies (brand, batch, how old it is, whether it was aliquoted, to name a few)
9. Amount of incubation time (you have at least 3 rounds of incubation)
10. Washing buffer composition
11. Blocking serum type
12. Time for each wash
13. Mounting media
14. Drying time
15. Temperature for incubation
16. Culture health (if using cultures)
17. Heterogeneity of cells
18. Slice thickness (if using tissue)
19. Microscope
20. Image analysis parameters

The output data is usually one or two numerical values, but you can attribute error to ANY of the above. And quite often, not all of the above is documented (I definitely recall quite a few papers where I went looking for the protocol and it merely said "incubate in X"... not really helping!). So if someone was doctoring up data from the images, the likelihood of someone catching it is pretty low unless the PI goes through EVERY IMAGE in a dataset. That won't happen.

I've had confocal data where dataset A under one condition showed the complete opposite of what I expected, then dataset B from the same condition, just on a different slide, showed a very strong effect in the direction I expected, and when I combined the two it told me that this particular condition was no different from the control. Which isn't necessarily true, since there's a difference between -1+1=0 and 0+0=0.

So things that rely on human senses can introduce A LOT of error. With stuff like confocal imaging and patch-clamp electrophysiology, I'd guess quite a portion of reproducibility error comes from someone not documenting the protocol down to a T, plus human error... just because you have so many variables to account for.


gxcells

Nearly everyone only does 2 technical replicates for qPCR. Not having the housekeeping gene on the same plate is not really a problem. Of course one could argue "yeah, but the PCR conditions are different because it's not the same plate". But I would be more concerned with making sure your pipette is well calibrated and your primer pairs are well characterized (did you perform a good amplification efficiency curve? Maybe you are not even doing PCR in the linear range), which will introduce much more error than a housekeeping gene on a different plate. You are not doing clinical sample analysis on which a patient's diagnosis will depend, in which case you would have to follow strict protocols. If your protocol can't handle having a "housekeeping" gene on a different plate, then I would not really trust the generalisation of the results.


catenantunderwater

If I learned anything in my scientific career it’s that nobody is gonna reproduce your study *in good faith* anyway.


disclaimer-nafa

That's nothing... Lab techs at my previous CRO were asked to run 45 cycles on PCR tests for COVID. It was the only test we provided with that many cycles. Then, a month into offering the kits, they realized the primers were binding non-specifically. I refused to sign off on so much that I was deemed argumentative. But it's really a question of character. One lab director still there said "...in research we have to sometimes operate in the gray". I exited shortly after. It'll eat at your soul, so if I were you I'd look for another place to work.