A year ago, I was skeptical. I had tried GitHub Copilot early on and found it useful for autocomplete, but not much else. Then I discovered agentic coding tools — Claude Code, Cursor, and ChatGPT with code interpreter — and my entire development workflow transformed in ways I didn't expect.

This isn't a hype piece. I want to share concrete examples of how these tools changed my day-to-day work, where they shine, where they fall short, and why I think every software engineer needs to develop fluency with them.

What "Agentic Coding" Actually Means

The key difference between traditional AI autocomplete and agentic coding is autonomy. An agentic tool doesn't just predict the next line — it can read your codebase, understand context, run commands, edit multiple files, and execute multi-step tasks with minimal guidance.

When I tell Claude Code to "add offline support to the chat module," it doesn't just generate a code snippet. It reads the existing chat service, understands the data flow, creates an IndexedDB layer, modifies the service to check connectivity, updates the UI to show offline indicators, and even writes tests. That's a fundamentally different interaction model.
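To make the shape of that work concrete, here is a minimal sketch of the offline-queue pattern described above. Everything in it is hypothetical illustration, not the actual generated code: the real implementation persisted the outbox to IndexedDB, while this version uses an in-memory array so the logic is easy to follow.

```typescript
type ChatMessage = { id: string; text: string; sentAt: number };

// Hypothetical stand-in for the generated service: queue messages while
// offline, send immediately while online, and drain the queue on reconnect.
// A real version would back `outbox` with an IndexedDB object store.
class OfflineChatQueue {
  private outbox: ChatMessage[] = [];

  constructor(
    private send: (msg: ChatMessage) => Promise<void>,
    private isOnline: () => boolean,
  ) {}

  // Returns "sent" if delivered now, "queued" if held for later.
  async submit(msg: ChatMessage): Promise<"sent" | "queued"> {
    if (!this.isOnline()) {
      this.outbox.push(msg);
      return "queued";
    }
    await this.send(msg);
    return "sent";
  }

  // Called when connectivity returns; returns how many messages flushed.
  async flush(): Promise<number> {
    let flushed = 0;
    while (this.outbox.length > 0 && this.isOnline()) {
      const msg = this.outbox.shift()!;
      await this.send(msg);
      flushed++;
    }
    return flushed;
  }

  pending(): number {
    return this.outbox.length;
  }
}
```

The UI's offline indicator falls out of the same pieces: whatever drives `isOnline()` (typically the browser's online/offline events) also drives the badge, and `pending()` tells you how many messages are waiting.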

My Daily Toolkit

Claude Code — The Heavy Lifter

Claude Code has become my go-to for substantial engineering tasks. Refactoring a service layer, migrating from one pattern to another, implementing a feature across multiple files — this is where it excels. I use it directly in my terminal, and the fact that it understands my full project context makes it incredibly effective.

My typical workflow looks like this:

```shell
# Start a task with clear context
claude "Refactor the authentication module to use the
new token refresh strategy. The current implementation
is in src/auth/ and the new API spec is in docs/auth-v2.md"
```

What I love about Claude Code is that it thinks before it acts. It'll read the relevant files, form a plan, and then execute — often catching edge cases I would have missed in a manual implementation.

Cursor — The IDE Companion

Cursor lives in my editor and handles the more granular work. Inline completions that actually understand my codebase, quick refactors, generating unit tests for a function I just wrote. It's like pair programming with someone who has read every file in my project.

The "Cmd+K" workflow for inline edits is addictive. Select a block of code, describe what you want changed, and it just happens. I reach for it constantly for small, well-scoped edits.

ChatGPT — The Thinking Partner

I use ChatGPT differently than the other two. It's my rubber duck for architecture decisions, my research assistant for unfamiliar APIs, and my brainstorming partner when I'm stuck on a design problem. When I was planning the migration from Cordova to Capacitor, I spent hours in conversation with ChatGPT mapping out the approach before writing a single line of code.

Real Impact: Before and After

Here's a concrete example. Last month I needed to add push notification support to both our iOS and Android apps — native implementations using Swift and Kotlin respectively, plus the backend handler in NestJS.

Before agentic tools, this would have been a 3-4 day task: research the native APIs, set up Firebase Cloud Messaging, implement the handlers, test on both platforms, and handle edge cases like permission flows and token refresh.

With agentic tools, I completed the entire feature in about 6 hours. Claude Code handled the NestJS backend and the Kotlin implementation. I used Cursor for the Swift side since I wanted more granular control while learning SwiftUI patterns. ChatGPT helped me think through the notification grouping strategy.
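As a rough sketch of the grouping strategy that came out of those conversations, the payload builder below shows the idea. The helper and its names are hypothetical, not the production code; the field shape follows firebase-admin's cross-platform `Message` type (an `android.collapseKey` for Android and an APNs `threadId` for iOS grouping), which is what the NestJS handler would ultimately hand to `admin.messaging().send()`.

```typescript
interface PushInput {
  token: string;    // device FCM registration token
  title: string;
  body: string;
  threadId: string; // conversation id used to group notifications
}

// Build one cross-platform FCM message with per-conversation grouping.
function buildPushMessage(input: PushInput) {
  return {
    token: input.token,
    notification: { title: input.title, body: input.body },
    // Android: collapse repeated notifications for the same conversation.
    android: { collapseKey: input.threadId },
    // iOS: thread-id groups notifications in Notification Center.
    apns: { payload: { aps: { threadId: input.threadId } } },
  };
}
```

In the real handler, this object would be passed to `admin.messaging().send(...)` from the firebase-admin SDK; the sketch stops at payload construction so the grouping logic stays visible.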

The code quality was production-grade. Not "AI-generated-looking" code, but clean, idiomatic implementations that followed each platform's conventions. I reviewed everything carefully and made adjustments, but the foundation was solid.

Where They Fall Short

I want to be honest about the limitations. None of these tools removes the need for careful review; as with the push notification work above, I still read and adjust every line before it ships.

The Skill That Matters Now: Prompt Engineering

The engineers who get the most out of these tools aren't necessarily the most experienced coders. They're the ones who can articulate what they want clearly and provide the right context. Prompt engineering is now a core software engineering skill.

The patterns I've found most effective all come down to the same thing: state the goal, point at the relevant files, and call out any specs or constraints up front, just as in the Claude Code prompt earlier.

Looking Forward

We're still in the early days of agentic development. The tools are getting better at an astonishing rate. A year from now, I expect the boundary between "what the AI writes" and "what I write" will be even blurrier — and that's a good thing.

My advice to developers who haven't adopted these tools yet: start now. Not because they'll replace you, but because the engineers who can effectively collaborate with AI agents will build better software, faster. It's not about writing less code — it's about solving more problems.