One Productivity Boost of AI in Engineering (It's Not Just Code Generation)
Recent studies and discussions, such as GitClear's research on the effects of AI copilots, tend to weigh code generation speed against quality impacts like increased copy-pasting and decreased refactoring. These are valid concerns, but it's worth highlighting one area where AI tools are providing a significant, less controversial productivity boost today: research.
One of the biggest time-savers AI currently offers engineers is in tackling repetitive research tasks. When integrating new libraries, SDKs, APIs, or other unfamiliar technologies, developers spend considerable time:
- Searching documentation.
- Comparing results with other examples online.
- Diagnosing errors when those examples don't quite work in their specific context.
AI assistants, through features like autocomplete and in-editor prompting (as in Copilot), offer a fast shortcut compared to manually hunting down snippets of information and working them into code. This can eliminate much of that initial manual legwork.
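For a concrete illustration, here is the kind of integration boilerplate an assistant can typically draft from a one-line prompt such as "fetch every page of results from this API." The endpoint, field names, and pagination scheme below are hypothetical stand-ins rather than any particular vendor's SDK; the point is that this glue code is exactly what developers would otherwise piece together from documentation and online examples.

```python
import requests  # assumes the widely used 'requests' HTTP library


def fetch_all_items(base_url: str, api_token: str) -> list[dict]:
    """Collect every record from a paginated REST endpoint (hypothetical API)."""
    items: list[dict] = []
    url = f"{base_url}/v1/items"  # placeholder endpoint
    headers = {"Authorization": f"Bearer {api_token}"}
    while url:
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()  # surface HTTP errors immediately
        payload = response.json()
        items.extend(payload["data"])       # assumed response shape
        url = payload.get("next_page_url")  # follow the cursor, if any
    return items
```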
However, this acceleration comes with caveats. Current AI tools often lack a deep grasp of the broader logic and context of the codebase. Without careful attention from the engineer reviewing the suggestions, it's easy to introduce subtle bugs, inconsistencies, or unwanted side effects. That initial time saved can quickly be lost chasing down the source of these unexpected results.
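To make that caveat concrete, consider a hypothetical assistant-suggested "improvement" to the sketch above: a retry loop that looks reasonable at a glance but, once retries are exhausted, silently returns whatever partial data it has collected instead of raising an error. That quiet change to the caller's contract is exactly the kind of subtle inconsistency a reviewer can miss.

```python
import requests


def fetch_all_items_with_retry(base_url: str, api_token: str) -> list[dict]:
    """Retry flaky requests: plausible-looking, but it hides a subtle bug."""
    items: list[dict] = []
    url = f"{base_url}/v1/items"  # same hypothetical endpoint as above
    headers = {"Authorization": f"Bearer {api_token}"}
    while url:
        for _attempt in range(3):
            try:
                response = requests.get(url, headers=headers, timeout=10)
                response.raise_for_status()
                break
            except requests.RequestException:
                continue  # retry quietly, discarding the error
        else:
            # BUG: all retries failed, yet we return partial data with no error,
            # silently changing the function's behavior for every caller.
            return items
        payload = response.json()
        items.extend(payload["data"])
        url = payload.get("next_page_url")
    return items
```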
Furthermore, if an engineer doesn't have a solid understanding of the concepts being introduced by the AI, attempting to have the assistant fix its own mistakes can lead to a "deterioration loop" – fixing one bug introduces another, and so on.
While this might seem similar to guiding an inexperienced junior developer, there's a key difference: well-guided junior engineers in a constructive environment learn, start to self-check, ask clarifying questions, and eventually break out of that loop. We can hope AI tools mature into another powerful layer of abstraction, much as languages and frameworks have, but in their current state they require significant human oversight and expertise to be used effectively and safely.
So, while celebrating the speed gains, let's recognize where the most reliable value often lies today – in accelerating the research phase – and remain vigilant about validating the generated output.