Why the Way We Talk About Artificial Intelligence Matters
It wasn’t reporting on AI that first got me interested in artificial intelligence. I got interested in AI the way a lot of us do: watching movies and TV as a kid. I loved Data on Star Trek and C-3PO in Star Wars. I loved the first two Terminator movies, especially the second one, where (spoiler alert!) Arnold Schwarzenegger is on our side. I was probably six or seven when my mom read me Isaac Asimov’s Foundation series.
To tell the truth, these stories are a big part of why I studied Applied Mathematics at UCLA, why I am getting a Master’s in Public Policy at Georgetown, and why I’ve spent so much time studying technology policy. I care about what the future looks like, and I want to make sure we do a good job building it. That said, while drawing inspiration from fiction about what the future could look like is great (think Star Trek, not the Terminator), it becomes dangerous when we let those fantasies creep into how we think about the real challenges and opportunities of AI.
ChatGPT isn’t conscious, like Data or C-3PO. And AI isn’t a government plot gone wrong, like the Terminator. Language matters. When people talk or write inaccurately about AI, it distorts public understanding in ways that can lead to unnecessary fear, bad policy, and a skewed sense of which risks we should prioritize.
Since September, I’ve worked as a Google Public Policy Fellow at Aspen Digital. One thing we’ve emphasized in my time here is shifting the public conversation on AI to better reflect what AI actually is and what it means for people. As part of that effort, Aspen Digital published three introductory primers on artificial intelligence to help journalists reporting on AI understand what it is, how it works, and who creates it.
Today, we’re announcing the winners of our 2023 AI Reporting Hall of Fame, highlighting the year’s best writing about AI tools in action.