Good Luck, Have Fun, Don't Die: Hollywood Finally Gets AI Anxiety Right

I wasn’t expecting a Gore Verbinski film to perfectly articulate everything wrong with the current artificial intelligence push, but here we are. Good Luck, Have Fun, Don’t Die hits theaters today, and from what I’m reading, it’s the first mainstream movie in years that actually understands the specific kind of dread developers and tech workers feel about where this is all heading.

The film follows Sam Rockwell as a time traveler from a machine-dominated future who crashes into a present-day LA diner, desperate to stop humanity from scrolling itself into extinction. It sounds like standard Terminator fare, but the anxiety it taps into is far more specific and current than killer robots. This is about doomscrolling. About algorithmic feeds. About the way we’ve built systems that hijack human attention and call it innovation.

The Screen Addiction We Can’t Talk About

What strikes me most about the premise is how directly it connects today’s attention economy to tomorrow’s existential threats. We’re all living this right now. I catch myself pulling out my phone while builds compile, during meetings, while supposedly watching TV. The devices are always there, the feeds are infinite, and the friction to engage is zero.

The tech industry built this. We built this. Every product manager optimizing for engagement metrics, every engineer implementing infinite scroll, every data scientist tuning recommendation algorithms for maximum session time. We knew what we were doing, or at least we should have known.
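
To be concrete about what “tuning for maximum session time” reduces to, here is a toy sketch in Python. None of this is any real platform’s code; the Item fields, the predicted_dwell_seconds and predicted_click_prob scores, and the rank_feed / next_page helpers are hypothetical stand-ins for the pattern, not an actual recommendation system.

# Toy illustration only -- hypothetical names, not any product's API.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_dwell_seconds: float   # model's guess at how long you'll linger
    predicted_click_prob: float      # model's guess that you'll tap through

def rank_feed(candidates: list[Item]) -> list[Item]:
    # "Relevance" here is just expected attention: whatever you'll stare at
    # longest floats to the top, regardless of whether it leaves you better off.
    return sorted(
        candidates,
        key=lambda it: it.predicted_dwell_seconds * it.predicted_click_prob,
        reverse=True,
    )

def next_page(candidates: list[Item], cursor: int, page_size: int = 10) -> tuple[list[Item], int]:
    # Infinite scroll is pagination with no visible end: serve the next slice
    # every time the user nears the bottom of the current one.
    ranked = rank_feed(candidates)
    page = ranked[cursor:cursor + page_size]
    return page, cursor + len(page)

That is roughly the whole trick: a sort key and a pager with no stopping point. The sophistication lives in the models feeding those predicted scores, but the objective they serve is exactly as blunt as it looks.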

Verbinski apparently uses visual hyperactivity throughout the film to mirror this overstimulation, which is either brilliant or exhausting depending on your tolerance for meta-commentary. But the point lands: we’re training ourselves and our society to be perpetually distracted, perpetually consuming, never actually present.

Hollywood’s AI Agenda Meets Its Counter-Narrative

The timing here is almost too perfect. Hollywood studios are currently pushing hard to normalize generative AI in production pipelines. Writers and artists just spent months striking over this exact issue. The industry wants us to accept AI as inevitable, as progress, as just another tool.

Then Verbinski drops a film where the villain is essentially unchecked technological adoption and our collective inability to pump the brakes. The movie includes creatures that look like “indictments of gen AI slop,” which is the kind of specific dig I didn’t know I needed.

I’ve seen the gen AI output flooding the internet. The soulless stock images, the generic blog posts, the videos that look almost right but feel deeply wrong. It’s content as pollution, and we’re all drowning in it. A film that visualizes this as literal monsters feels appropriate.

We Already Know Better But Do It Anyway

The most uncomfortable part of the film’s premise is that the characters know their screen addiction is harmful but can’t stop. That’s not science fiction. That’s Tuesday.

I have Screen Time limits set on my iPhone. I blow past them constantly. I have browser extensions that block social media. I disable them when I’m bored. The awareness doesn’t translate to behavior change because the systems are designed to be irresistible. That’s the actual product specification.

The character with Wi-Fi sensitivity who struggles to hold a job is played for drama in the film, but it’s a perfect metaphor for anyone trying to opt out of our hyper-connected tech culture. Try being a developer without Slack, without email, without constant connectivity. Try shipping products without cloud services and always-on infrastructure. The infrastructure of modern work assumes your perpetual availability.

The Unhinged Energy Feels Right

From the reviews, the film sounds messy. Multiple storylines, tonal shifts, a Rashomon-style structure that might be too clever for its own good. But honestly, that chaotic energy matches the moment we’re in. There’s no clean narrative about AI risk that satisfies everyone. The threats are diffuse, interconnected, hard to articulate without sounding like a conspiracy theorist or a Luddite.

Sam Rockwell’s character has made dozens of trips back in time and still doesn’t know which combination of people can prevent the apocalypse. That uncertainty, that desperate trial-and-error approach to averting catastrophe, feels more honest than the usual chosen-one narrative. We don’t know which interventions will matter. We don’t know who needs to be in the room. We’re just frantically trying things while the clock runs down.

The fact that he shows up with a bomb strapped to his chest is darkly funny. What else do you do when nobody will listen? When everyone agrees things are bad but nobody changes course? When the incentive structures all point toward accelerating straight into the wall?

I’m not saying the film is perfect or that it has answers. But in a moment when tech companies are asking us to trust them with increasingly powerful AI systems, when venture capital is flooding into generative models with no clear use case beyond “replace human labor,” when the default industry position is “move fast and let society deal with the consequences,” we need stories that give voice to the anxiety. We need permission to say the sky might actually be falling, even if that makes us sound hysterical.
