Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools have obvious productivity benefits, there’s a growing concern that they may also have unintended consequences for the quality and skillset of programmers.
I’ll bet people said the same thing when Intellisense started suggesting line completions.
And when errors were highlighted in the code rather than in console output.
And when high-level languages started appearing.
This really isn’t a good comparison at all. One gives you a list of choices you can make, and the other gives you a blind answer.
If seeing what argument types the function takes makes me a worse engineer, so be it, I guess
I’m sure many did, but I’m also pretty sure it’s easy to draw a line between code assistance and LLM-infused code generation.
They did.
Yep.
And yes.
That said, if you believed my mentors, we were barrelling towards a 2025 in which nothing running on software ever really worked reliably.
So they may have been grumpy, but they were also right on that point.
I mean, with the “move fast and break things” mentality of most companies nowadays, I’d say they were spot-on
And they may have been right. But the goal is usually to produce working code, not to prove you’re a better programmer, and useful tools can help with exactly that.