Latest Archives
Permission Hungry Agents and the Return to First Principles
ThoughtWorks Radar 34 reveals AI's paradox: we're racing forward while rediscovering software fundamentals, and our security models aren't ready.
Reading Code in the Age of AI: Why Human Review Still Matters
ThoughtWorks Radar reveals AI's paradox: tools generate complexity faster than we can understand it. Time to revisit fundamentals.
The Permission Hungry Dilemma: When AI Agents Want Access to Everything
ThoughtWorks Radar 34 highlights a fundamental tension: the most useful AI agents need broad access, but our security guardrails haven't caught up yet.
The Virtue of Laziness: Why AI Threatens What Makes Us Good Engineers
LLMs lack the programmer's essential virtue of laziness. Without constraints, they generate complexity instead of elegant abstractions.
The Theater of Computation: What Alan Turing's Story Still Teaches Us About Building Systems
Watching Breaking the Code reminded me that the principles Turing fought for—elegant abstraction and human dignity—still matter in system design.
AI-Assisted Development: The Taste Problem
Why coding with AI agents works brilliantly for implementation but falls apart for API design. Lessons from building real systems with Claude.
When Machines Write Code, Humans Must Learn to Judge
As LLMs generate more code, teams face cognitive surrender and debt proliferation. The future isn't about writing code; it's about verification.
When Agents Write Code, We Judge It: The Verification Economy
As LLMs generate code at scale, our job shifts from writing to verifying. What does this mean for how we organize teams and think about programming?
Treating AI Instructions as Infrastructure, Not Documentation
How encoding team standards as versioned AI instructions solves the consistency problem that plagues AI-assisted development workflows.
Making Team Standards Executable: Infrastructure for AI-Assisted Development
AI coding tools produce wildly different results based on who's prompting. Treating team standards as versioned, executable instructions solves the consistency problem.
Tests Are the Real Safety Net: Why Your AI Specs Need Executable Validation
Writing specs for LLMs is trendy. But without automated tests, you're flying blind. Here's why the spec document isn't your safety net.
Code Review, Observability, and the Cognitive Cost of AI Amplification
Rethinking code review as product judgment, observability as our new IDE, and whether AI tools extend our capabilities or replace them entirely.
The Apprentice Gap: Why Watching AI Code Matters More Than Ever
As AI agents automate more development work, we're creating a generation gap where juniors never learn the fundamentals. The Ralph loop offers a solution.
AI as an Organizational Multiplier: Why Your Team's Experience Varies Wildly
AI amplifies what you're already doing. Why some teams see half the incidents while others face double, and what agent architecture teaches us about control.