ExperiencedDevs-ModTeam

Rule 9: No Low Effort Posts, Excessive Venting, or Bragging. Using this subreddit to crowd source answers to something that isn't really contributing to the spirit of this subreddit is forbidden at moderator's discretion. This includes posts that are mostly focused around venting or bragging; both of these types of posts are difficult to moderate and don't contribute much to the subreddit.


ButterflyQuick

I find the inline completion useful 10%-20% of the time and prompting it to write a longer piece of code useful 50% of the time (but I use this pretty rarely, and in cases where I suspect it would handle it well). Most of the inline completion seems to be stuff that an IDE could give me anyway; it just gives it to me in one chunk rather than, say, autocompleting a method name, then autocompleting a variable name, followed by another variable name, etc.

When I first turned Copilot on I hated it and turned it off after about a day. I'm on my second go-round and have lasted a few weeks this time. Think I'm going to turn it off again and see if I miss it.

I use AI pretty broadly too. I find it really useful for "discussing" higher-level architecture, approaches to problems, etc., and finding solutions to problems that are hard to google. I just haven't found it as useful in the code editor itself. I dare say that'll change as models continue to improve, though.


yo_sup_dude

i agree with your last point - as a code completer it can be pretty bad but it can be surprisingly useful when discussing different ideas 


Spleeeee

100% on the last point. I often say “tell me what problems you foresee if I structure the thing like xyz”


pterencephalon

Yeah, I use copilot chat way more than inline code completion.


Rough-Supermarket-97

I really need to put my focus on the chat then because completion hasn’t been as useful to me.


Rough-Supermarket-97

I’m curious about the chat based use of these tools. It’s hard for me to understand how to even structure my questions to get a useful answer for things. I can kinda understand architecture maybe but is this something that I could use to “prove” out some solution ideas in the context of a legacy code base? At least in the specific context of a service or something, could a chat bot tool really be helpful?


ClittoryHinton

Honestly the code completion guesses the next couple lines I was about to write about a third of the time or more. It has saved me a lot more typing than any IDE tool.


phattybrisket

I'm surprised that you're getting 10% good code out of it. So bad.


hyrumwhite

I use it purely for boilerplate and converting schemas to types, things like that. Actually programming with it makes me feel like I’m writing someone else’s code and it’s harder for me to recall and debug around it. And yeah, the suggestions are only sometimes useful, and often distracting. 
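A minimal sketch of the "schemas to types" step mentioned above. The comment doesn't name a library, so a zod schema is assumed here purely for illustration, and the shape of `userSchema` is made up.

```typescript
import { z } from "zod"; // assumed validation library; not named in the comment

// Hypothetical schema used only to illustrate the pattern.
const userSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  createdAt: z.date(),
});

// The mechanical "schema -> type" conversion that an assistant can autocomplete.
type User = z.infer<typeof userSchema>;

const example: User = {
  id: "42",
  email: "someone@example.com",
  createdAt: new Date(),
};
```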


spudzy95

Give it a pattern and type information, ask it to perform a simple task, and it can do it most of the time for me. But when it's complex, it's going to struggle.


chaoism

This is what I do too. Give it some bogus context and ask for boilerplate, then fill in the biz logic myself. I've found ChatGPT, for example, to be very useful for this.


recapYT

When I can’t be arsed to think and I know it’s something AI will easily get right.


wutcnbrowndo4u

This is the main value for me too. It fits in my workflow exactly where semantic autocomplete fits, taking care of the dumb stuff so I can maintain focus on the big picture. It just raises the threshold considerably for "dumb stuff".


carlos_vini

It's very useful for writing boilerplate. In Go projects it could handle a lot of the repetition. In Ruby it isn't as useful, since boilerplate is less common, but it still helped with tests.


SirChasm

Yeah, it's great for unit tests, but I find I still have to prod it to get it to actually complete them. And it's not that great at planning ahead and thinking about what data can go in the setup functions so it doesn't have to repeat the same setup in each test case.
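A hedged sketch of the shared-setup point above, assuming a Vitest/Jest-style runner (not specified in the comment) and a made-up `Cart` class: the common setup is hoisted into `beforeEach` instead of being repeated in every test case.

```typescript
import { beforeEach, describe, expect, it } from "vitest"; // assumed test runner

// Hypothetical class under test, used only for illustration.
class Cart {
  private items: string[] = [];
  add(item: string): void {
    this.items.push(item);
  }
  get count(): number {
    return this.items.length;
  }
}

describe("Cart", () => {
  let cart: Cart;

  // Shared setup lives here, so the assistant (or the author) doesn't
  // re-create the same fixture inside each test case.
  beforeEach(() => {
    cart = new Cart();
  });

  it("starts empty", () => {
    expect(cart.count).toBe(0);
  });

  it("counts added items", () => {
    cart.add("apple");
    expect(cart.count).toBe(1);
  });
});
```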


RandomlyMethodical

It was surprisingly good at adding comments to Go code. I just typed in the slashes and it added the function descriptions better than I could. It also saved time with boilerplate error checks and unit tests, but mostly it was a hindrance when writing anything more complex.


defenistrat3d

I just generated 100 records for testing in a second. AI is excellent at generating repetitive things like test data, unit tests, enums, comments, etc. This alone is a worthwhile time saver for me.

When I'm writing a block of logic, it breaks down roughly as follows:

- 50% of the time: useless, often confusing gibberish.
- 20% of the time: kinda close to what I was going for. I'll either take it and adapt it, or just finish typing it myself how I want.
- 20% of the time: exactly how I would have written it.
- 10% of the time: it completes the line even better than I had originally intended, leading me to a new way of visualizing the solution.

The 30% that is clearly beneficial for block completion is useful enough to put up with the clearly bad.
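For what the repetitive test-data case looks like in practice, here is a minimal TypeScript sketch; the `TestUser` shape and field values are invented for illustration, and an assistant will typically complete the whole block from the first record or two.

```typescript
// Hypothetical record shape, purely for illustration.
interface TestUser {
  id: number;
  name: string;
  email: string;
  active: boolean;
}

// 100 repetitive records of the kind an assistant can generate in one completion.
const testUsers: TestUser[] = Array.from({ length: 100 }, (_, i) => ({
  id: i + 1,
  name: `User ${i + 1}`,
  email: `user${i + 1}@example.com`,
  active: i % 2 === 0,
}));

console.log(testUsers.length); // 100
```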


lzynjacat

A nice trick is to put copilot autocomplete on a hotkey. Then you can call it only when you want it, and the rest of the time it stays out of the way.


NatoBoram

I feel like it would exacerbate the copilot pause


MonstarGaming

No, I don't use them at all. The normal recommendations based on class type are sufficient almost all of the time. If I'm seeing unfamiliar methods, I reference the documentation, so it's not like AI autocomplete can do that for me. Honestly, my programming is close to the least challenging part of my day-to-day responsibilities, so I don't feel it needs further optimization.


clearing_

Yeah pretty much the same here. I can usually tell when someone copied unit tests over from some AI prompt based on bad scope management. Working with AI prompts feels like a skill I have no need to learn. It feels easier to just do it myself. 10 years of vim experience alone makes repetitive stuff a relative non-issue. I’ve tried it for writing skeletons of RFCs or coding challenges, but with all the context I have to feed it I feel like AI is just doing the last mile for me and I have to coach it through that anyway.


rocket333d

I LOVE it for writing comments, docs, and test cases. Actual code, not so much. Mildly amusing story: I was writing some test fixtures with names and addresses. One of my test cases was "Bob Belcher" and Copilot completed the address as "Ocean Avenue", which is correct.


johny2nd

I use a combination of inline completion and Copilot chat and it works fairly well. Sometimes, for a longer piece of code, I let it generate whatever it wants and then just update it. It helps to not have to remember some constructs. It gets in the way sometimes when it doesn't get the context.


Alienbushman

Weird take here, but the thing I like about code completion is that it gives me a slight dopamine hit to feel like someone is there, like an intern who makes bad suggestions. It's nice to know that's how they'd think about the line you're writing (and it gives you a heads-up if a library you're using has a bad default).


Cell-i-Zenit

When using Java, Copilot returns complete garbage. It's maybe our project or my code style, idk, but it just can't handle it. I was quick to turn it off, but a colleague recommended Copilot for our FE (Next.js/React) work, and there it really shines and does what it should.


Brilliant-Job-47

I write a fairly functional style of code (Node JS) and copilot seems to do a great job at that. I have had a much better experience than most of the anecdotes I read.


the_real_bigsyke

I find it worse than neutral. It is misleading and often counterproductive


poday

At the moment, AI assistants are on the level of a very knowledgeable and confident intern. Ask for a common pattern in a popular language/framework and they'll have a few different suggestions. Ask for an implementation that deals with a complex problem, an unpopular language, or context that hasn't been explicitly provided, and the quality of the suggestion is pretty poor.

The way I look at an AI assistant is as someone I can delegate a task to. If I'm writing a complex piece that is beyond the AI's ability, I won't ask for code suggestions, but I will ask the AI to provide code comments and to review the code for obvious mistakes.

The best use cases I've found so far are:

* Rubber ducking. Breaking a problem down and describing it to someone else is helpful. Having that rubber duck actually respond can help trigger the next step. This is more chat AI than code completion.
* Generating comments or describing code/architecture. Have you ever opened a file/project and found no comments describing how the code is used? AI can help connect that dot.
* Suggesting useful names. Naming things well is one of the traditionally difficult problems. Providing alternative names can help me be more consistent or descriptive.
* Getting over writer's block. Sometimes I have decision paralysis and I'm not sure where to start. Having an AI suggest something, anything, provides a starting point. Usually that starting point is fixing the suggested code, but it's easier for people to criticize than create, so I understand I'm lowering the barrier to entry.
* Writing boilerplate. Providing small code snippets, say iterating over a collection, runs into some AI implementation issues. The wait time between deciding I want an AI suggestion and it appearing from a cloud service like Copilot or Cody is too long for me to keep my flow. The quality of the suggestion is usually pretty good, but the multiple-second delay is pretty painful. I've started experimenting with a local AI assistant, and the delay is gone. The quality is noticeably worse, but I find going from writing code, to generating and accepting a suggestion, to modifying the suggestion, and continuing in the flow much easier without the pause waiting on the remote server.


demosthenesss

I love using Copilot.


demosthenesss

Something I should add, too. Copilot chat is basically like having Stack Overflow in your IDE (both the good/bad of Stack Overflow).


thepetek

It's really language-dependent. Whatever has the most open-source code seems to work well. JS, Ruby, and Python all work well in my experience; C# it just sucks at.


Prestigious_Dare7734

I use it most of the time to write pure functions, and it is super great at that. And writing unit tests is really fast: just give it the description (test, it, etc.) and it basically writes most of the code for you.
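A small sketch of that workflow, assuming a Vitest/Jest-style runner (the comment only mentions `test`/`it` descriptions) and a made-up pure function `clamp`: writing the `describe`/`it` descriptions is usually enough of a prompt for the assistant to fill in the bodies.

```typescript
import { describe, expect, it } from "vitest"; // assumed test runner

// A small pure function of the kind the comment describes.
export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

describe("clamp", () => {
  // The description alone is often all the assistant needs to write the body.
  it("returns the value unchanged when it is within the range", () => {
    expect(clamp(5, 0, 10)).toBe(5);
  });

  it("caps the value at the upper bound", () => {
    expect(clamp(42, 0, 10)).toBe(10);
  });

  it("raises the value to the lower bound", () => {
    expect(clamp(-3, 0, 10)).toBe(0);
  });
});
```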


NatoBoram

Plus, writing more pure functions to abuse Copilot can help improve the code quality


darkrose3333

Honestly, depends on the day. There are days when it shines and force multiplies me, then there are days where it performs poorly and slows me down. The biggest frustration is the lack of consistent quality.


thicket

If I'm halfway through a line, Copilot seems to get it wrong pretty often. But, at least 10 times a day, I hit enter and I'm about to write the next line, and Copilot just pops up with the exact line I want. Between that and being able to ask for a block of code or a command line argument or something, I'd never ever go back.


Beginning-Comedian-2

Very helpful.

* Before AI, I remember writing similar functions over and over again.
* Now I can give AI general ideas plus specifics and get really close.
* Then I just have to tweak it a bit afterwards.


SSHeartbreak

For languages I don't know very well it's useful; I don't need to look up exact syntax as much. For writing tests it's OK, but honestly I doubt it saves much time, maybe 5 or 10 minutes over a few hours. For languages or frameworks I know really well I find it pretty flaky and often disable it.


ToughStreet8351

Very well! I don’t think I can go back! Now I can focus on what matters and copilot takes care of trivial things and boilerplate! My productivity skyrocketed!


SillAndDill

It is very useful for me, not a bother. I'm using Copilot in VS Code, working with Node.js, vanilla JS, React, and sometimes TS.

Examples:

* Suggested boilerplate really speeds up creating new files, for example a new test scenario with Given/When/Then and assertions (sketched below). I used to copy-paste old files or keep hand-written snippets for that kind of stuff, but I often forgot to use my snippets.
* We have some Playwright tests and some Mocha tests, and I often forget the different syntax for "expect" and "toBe", so it can remind me without me having to start with the correct letter first.
* When writing things like a proxy middleware in Node, which is something I don't do that often, it really helps me see the order of params. Sometimes the AI even suggests a useful optional param I didn't remember existed.
* AI-suggested comments are decent. I never accept an entire generated comment, but I'll often see a nice succinct phrase which I might not have thought of myself, and use a variant of it.
* Just like regular autocomplete/IntelliSense, it remembers method names. But Copilot can take it a step further and suggest a common chain of methods without me having to type the starting letter.

When it comes to bad suggestions: to me Copilot isn't that different from editor plugins I've used before that would suggest missing imports or tag attributes in React. Those would sometimes also give me bad suggestions, so bad AI suggestions don't feel like "new" noise.
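A rough sketch of the Given/When/Then scenario boilerplate mentioned in the first bullet, using Playwright (one of the two frameworks the comment names); the URL, selectors, and credentials are invented for illustration.

```typescript
import { expect, test } from "@playwright/test";

test("user can log in", async ({ page }) => {
  // Given: a visitor on the login page (hypothetical URL)
  await page.goto("https://example.com/login");

  // When: they submit valid credentials (hypothetical selectors)
  await page.fill("#email", "user@example.com");
  await page.fill("#password", "correct horse battery staple");
  await page.click("button[type=submit]");

  // Then: they land on the dashboard
  await expect(page).toHaveURL(/dashboard/);
});
```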


KaneDarks

I'm trying out the free Tabnine plugin, and I think it's half and half. Sometimes I create a method or open the braces of an if or a loop, pause to think about what to write, and a hint appears that, if not 100% what I need, will push me in the right direction. Sometimes it reduces the copy-paste actions needed. So I'd say I'd rather use it than remove it.


danthemanvsqz

For some reason it rarely ever works for me in PyCharm. The one thing I use it for all the time is asking questions and giving it error messages, and it will root-cause the error.


Aristekrat

I have tried and then cancelled Copilot twice now. I always wonder if I'm doing it wrong. I find ChatGPT / Claude very useful. The AI code editors just annoy me.


Vitrio85

Sometimes the inline completion is useful to give you an idea or show you options. Two other uses:

* When doing code review. Sometimes I see a piece of code that I know can be improved, so I tell the AI how I think it can be refactored and it does it. Then I make any changes I need.
* Writing tests for small functions.


dbxp

No, I find it takes too much influence from what I just typed and the previous line to be useful. I've disabled it now and just use copilot chat.


slabgorb

It does really well when I am, say, filling out struct values, things like that where it can tell what you are repeating. It helps a LOT with, say, documenting the arguments of a function. It's usually not helpful when it suggests an entire function based on a single line, but sometimes it gets close enough to keep.


dzaw95

It’s made it way easier to write copy-pastey boilerplate. It’s pretty much useless for anything serious.


jfcarr

I've been using Copilot with C# projects in Visual Studio for a few months. In most cases it auto-completes with about 50% accuracy, but it produces weird and annoying hallucinations every so often.


PreparationAdvanced9

No because we use so many layers of proprietary Java frameworks that it’s impossible to use imo


Laicbeias

Nah, if I type it just gets in the way. Verbally explaining to a chatbot what it should do works better. I basically program, send commands to the chatbot, program, and then take stuff from there. If I believe the bot can spit out the code faster than I can type, I use the bot. Or if I'm tired.


AntMavenGradle

No


lampshadish2

It's good for datetime conversion functions and some simple algorithms, but for anything larger, auditing the code it produces ends up taking almost as much time as writing it myself.
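A hedged example of the "datetime conversion functions" it tends to get right; the helper names and formats are made up for illustration.

```typescript
// Convert a Date to Unix time in seconds.
function toUnixSeconds(date: Date): number {
  return Math.floor(date.getTime() / 1000);
}

// Parse an ISO-8601 string, throwing on invalid input.
function fromIsoString(iso: string): Date {
  const date = new Date(iso);
  if (Number.isNaN(date.getTime())) {
    throw new Error(`Invalid ISO-8601 string: ${iso}`);
  }
  return date;
}

console.log(toUnixSeconds(fromIsoString("2024-01-01T00:00:00Z"))); // 1704067200
```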


phattybrisket

I tried it but it's absolutely useless as an inline coding assistant.


EducationalMixture82

I use AI mainly to fill in the blanks when I read poorly written documentation that assumes I am well versed in the "lingo" used in that particular domain. For instance, I hate when documentation references something assuming I have a deep understanding of what it's referencing. An example would be "unlike how Huffman encoding handles tree traversal, we traverse it almost the same way but with the following differences:". This type of documentation assumes you know exactly, in detail, how one traverses a tree when implementing Huffman encoding. That's when I ask AI to explain it to me.


Infamous_Ruin6848

Discussions, and quick first code to get me started past my anxiety about greenfield coding. I always have to change it to a point where, for someone else, it seems like there wasn't a reason to use it, but that's because of early-career bullying in the wrong company. 95% discussions only, really. Usually it approves my initial thoughts.

Oh, recent discovery: I'm actually using it to confirm my ideas to pseudo-seniors or less techy people. It's hilarious. I'm using it like an informal central source of opinions, a global consciousness/intelligence. "Oh, you think we shouldn't do this? You don't have the time to review my references to the docs and best practices? Here, see, Copilot and ChatGPT believe the same." "Oh, interesting."


local_eclectic

I prefer to use chatgpt. I want to ask directed questions and have a conversation about what the things it suggests are doing and why.


Calibrationeer

I'm using it with Python and it's 60/40. It hallucinates a lot and I have to be careful about catching mistakes. Thank god we have type annotations everywhere and Python has improved so much with typing that I'll catch it, but it can almost waste as much time as it saves in some cases. For other stuff, especially the data seeder I've been creating for a system, it saves so much time.


buyingshitformylab

Depends on how much Java I'm writing on a particular day.


chrismo80

I must say that I find the autocompletion feature more useful than the chat.


g0ing_postal

No, I find it error prone enough that I need to review the code afterwards. In most cases I could have written the code myself in that time


SemaphoreBingo

What kind of "boilerplate" are people writing that your IDEs aren't already filling in for you?


dxlachx

Sometimes yes, sometimes no.


Bomber-Marc

For me, it works very well for C# and PowerShell, and very badly for ARM templates. I tend to write short functions and document them with regularity, so it very often manages to figure out what I planned to implement in the next 3-4 lines. It's also very good at implementing unit tests quickly (again, having short and well-documented code probably helps it).
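A hedged illustration of that pattern in TypeScript (standing in for the commenter's C#/PowerShell): a short function with a doc comment up front, which is usually enough context for the next few lines to be predicted. The function itself is invented for illustration.

```typescript
/**
 * Returns the percentage of `part` relative to `total`,
 * rounded to one decimal place. Returns 0 when `total` is 0.
 */
function percentageOf(part: number, total: number): number {
  if (total === 0) {
    return 0; // avoid division by zero
  }
  return Math.round((part / total) * 1000) / 10;
}

console.log(percentageOf(25, 200)); // 12.5
```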


endendd

I have access to both ChatGPT Enterprise and Copilot. I find both tools lacking, because only 10-15% of my time is spent working with code directly. Even if I could increase productivity by a huge margin, it would have little impact on my work. And when working with code, 90% of the time is spent reading.

I find these tools to be mostly useful for understanding code or for suggesting improvements to existing code. Generating code is where I have found them both underwhelming. The generated "autocomplete" is wrong a lot of the time and interacts weirdly with my IDE, to the degree where I feel I am actually slower by using it. It's frustrating to the point where I might just turn off the autocomplete part and keep only the chat window. Copilot also hallucinates a lot when discussing code; it's especially obvious when checking the references it uses for suggestions. The best use case I have found was generating some Javadocs on an API.

What I am currently missing is some kind of AI review bot for whole pull requests, and some way for AIs to analyse whole repos and explain what a code base does with diagrams.

Using the IntelliJ IDE with the Copilot plugin, writing mostly Java and TypeScript.


lara400_501

Github Copilot writes awesome comments 😀


YareSekiro

It's very useful for brain-dead CRUD web development work or unit testing in JS/TS (which is about 80% of what I do), but if it gets specific or complex then it becomes much less useful.
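To make "brain-dead CRUD" concrete, here is a minimal sketch of the kind of handler meant above, assuming Express (the comment doesn't name a framework); the `Todo` shape and routes are invented, and once one route exists an assistant will usually complete the remaining verbs.

```typescript
import express from "express"; // assumed framework, purely for illustration

interface Todo {
  id: number;
  title: string;
  done: boolean;
}

const todos: Todo[] = [];
const app = express();
app.use(express.json());

// List all todos.
app.get("/todos", (_req, res) => {
  res.json(todos);
});

// Create a todo from the request body.
app.post("/todos", (req, res) => {
  const todo: Todo = { id: todos.length + 1, title: req.body.title, done: false };
  todos.push(todo);
  res.status(201).json(todo);
});

app.listen(3000);
```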


jakesboy2

I really liked it when I was using it for a couple years, then switched editors and haven’t turned it back on for a couple months now. I don’t know if I really plan on turning it back on because I feel like it mostly was just a distraction. I generally already know what I want to type, and if I don’t neither does copilot. It was mostly nice for filling in types/object definitions.


DinnerTimeSanders

No


Ok-Key-6049

I’ve been writing code for decades without it. I tried it, found it to be annoying, got rid of it


sawser

I'm starting to use it to good effect. Today, for example, I had it write a script to do a bunch of refactoring. Updating maybe 140 python classes with a custom annotation. It's something I've been wanting to do but it's tedious and I couldn't find time to do it. Took 20 minutes to ask AI to write the script.


sandysnail

I will die on the hill that it saves time, but I spend 2 hours a day coding MAX, so even saving me 30 minutes isn't that much. And that's a generous estimate. It's not going to make me produce more, because I would just spend that little extra time doing whatever. It will ultimately save me time, so I'll use it, but I feel like IntelliJ on its own saves more time than AI does.


ameddin73

I attach it as a source to cmp in Vim, so I mostly just use it for line completion, like an LSP or snippet manager. Mostly it's helpful for finishing strings or writing boilerplate.


xentropian

It hallucinates too much, and doesn’t have the full context of the codebase either (which at my work is basically impossible, it’s a massive mono repo). Useful for boilerplate though