The AI Tools That Actually Work

I've tried dozens of AI tools over the past two years. Most are gimmicks. A few are genuinely transformative. Here's what survived in my daily workflow.

Tier 1: Can't Work Without

Claude (Anthropic)

My primary AI assistant for coding, writing, and thinking. I use Claude Code for:

  • Implementing features from specs
  • Debugging complex issues
  • Code review and refactoring
  • Writing technical content (like this blog post)
What makes Claude different: it reads my entire codebase, understands context, and writes code that fits my patterns — not generic boilerplate.

Cursor

AI-native code editor. The tab completion is good, but the real value is Cmd+K for inline edits and the chat panel for asking questions about your codebase.

Lovable (lovable.dev)

My secret weapon for rapid prototyping. I describe what I want, and Lovable generates a working React + Supabase app that I then customize and polish. It's cut my initial setup time from hours to minutes.

Tier 2: Very Useful

TanStack Query + AI

Not an AI tool itself, but AI-generated TanStack Query hooks are remarkably good. The pattern is so consistent that AI nails it almost every time.
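To illustrate why, here's the shape of the hook AI tools reproduce so reliably: a typed fetcher, a query key, and a `useQuery` call. This is a minimal sketch using TanStack Query v5's object syntax; the `Todo` type and the `/api/todos` endpoint are hypothetical stand-ins, not from any real codebase.

```typescript
import { useQuery } from "@tanstack/react-query";

// Hypothetical data shape for the example.
interface Todo {
  id: number;
  title: string;
  done: boolean;
}

// Typed fetcher: throw on non-2xx so TanStack Query surfaces the error.
async function fetchTodos(): Promise<Todo[]> {
  const res = await fetch("/api/todos");
  if (!res.ok) throw new Error(`Failed to fetch todos: ${res.status}`);
  return res.json();
}

// The hook itself: stable query key, fetcher, done.
export function useTodos() {
  return useQuery({
    queryKey: ["todos"],
    queryFn: fetchTodos,
  });
}
```

The structure barely varies from hook to hook — only the key, the URL, and the return type change — which is exactly why a model trained on thousands of these gets it right.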

GitHub Copilot

Still useful for autocomplete, especially for repetitive patterns. But Claude Code has replaced most of what I used Copilot for.

What Didn't Stick

  • AI-generated tests — They look right but test the wrong things. I still write tests manually.
  • AI code review bots — Too many false positives. Human review is still better.
  • AI project managers — Good for brainstorming, bad for actual project management.
The Meta-Lesson

AI tools work best when they augment a skill you already have. If you're a good developer, AI makes you faster. If you're learning, AI can teach you — but you need to understand what it generates, not just accept it.

The developers who'll thrive are the ones who know when to use AI and when to think for themselves.