Ryan Merkley is currently a senior tech fellow with Aspen Digital, where he spent nearly two years as the organization’s managing director. Last year, he founded Conscience, an emerging biotech non-profit that uses AI and collaborative science to address areas of market failure in drug discovery. He’s had a storied career in technology and the public sector, serving as CEO of Creative Commons, COO of Mozilla, and Senior Advisor to Mayor David Miller in Toronto, where he led budget policy and founded Toronto Open Data. We caught up with Merkley from his home in Toronto to talk AI and elections, and to unpack a new report he worked on that has just come out from the Council for a Fair Data Future.
To start with the new publication out of the Council for a Fair Data Future: what exactly is fair data?
The Council for a Fair Data Future is trying to answer a simple question with what can feel like a complicated answer. The question is: Is it possible for communities and individuals to benefit more from the data that’s collected about them in the course of their use of apps and the internet?
Every day, from your phone to your computer to walking in the streets, there’s data being generated about you and your community. That data is collected and resold and used to provide services and to sell things. Now, that data could also be useful in spheres like public health or urban planning, but it’s really rare for that data to be available to the communities it’s gathered from and even more rare for it to benefit them.
What did you recommend we do about this asymmetry in data fairness?
Turns out it’s a great big hairy problem, which is the best kind of problem for Aspen to look at—I think Aspen is best at complex and chaotic and multifaceted questions. But look, there aren’t any easy answers, and most of the answers we came to are about tradeoffs and a balancing of interests and views.
The council’s output was a report with a set of recommendations focused on a particular part of the ecosystem: philanthropy, in its role as a funder of projects that generate data as a byproduct of their ongoing work.
Too often, the data produced as an output of philanthropic investment is not accessible either to the philanthropy or to the people from whom it was collected. That’s because grantees are (rightly) focused on serving communities, not on collecting and stewarding data, and philanthropic organizations aren’t asking them to do it.
Our recommendation is that the kinds of communities philanthropy wants to help should have access to the data so they can use it to benefit themselves. Too often that data sits behind closed doors. The goal is to get more benefit out of what philanthropy is already spending on, especially since analyzing data and sharing it back is not really the business that a lot of philanthropies are in.
Aspen Digital has several priority areas, but which one are you most hopeful can have a transformational impact soon?
The work the team is doing on AI and elections will have an urgent and immediate impact on our democracy. In the lead-up to the last general and midterm elections, the Aspen Digital team did a lot of work to help people understand the impact of tech on elections, focusing in particular on helping the media understand what’s happening so they can cover it better.
Senior journalists have told us: “You meaningfully impacted how we covered and understood this election. You helped us do a better job, and you helped us understand issues like election audits, AI, and cybersecurity.”
What have you learned at Aspen that’s really pushed your work at Conscience forward?
The secret sauce of Aspen, and what made me want to stay connected even after I started this new organization, is that it’s a place where we get to grapple with the toughest issues of our time in an environment where people will pay attention to what we do. People tell us they rarely see civil society at the table with industry, at the table with government, all having serious conversations about relevant issues. There just aren’t enough rooms like that.
I now work in open science, trying to use radical collaboration to solve problems that the market is failing to solve, particularly in drug discovery. Often that’s done exclusively through a nonprofit model, but I have big pharma at the table with us. That’s something I learned from Aspen. That’s a thing we do in our work every day.
You’re helping host a convening called The Second Order Effects of AI in Hawaii in August. What kind of conversation are you hoping to have?
The conversation we’re hoping to have lives in a space called foresight: trying to imagine long-term scenarios. Going from “if this, then that,” and really playing it out.
In the late 2000s, when the iPhone came out and the app-based smartphone universe began, we didn’t fully imagine the ways in which we’d be superconnected and the barriers that would break down. Nor did we fully grasp the cascading effects of being always connected, and what it would do to everything from the global economy to our individual attention spans.
What we’re interested in spending time on with this multidisciplinary group of thinkers is: What does it mean when all these AIs become cheap, accessible, and ubiquitous? Today, lots of AI is still crappy and unreliable. But it’s clearly useful in a way that, let’s say, blockchain wasn’t. Crypto people will come at me with pitchforks, but we’ve spent a decade trying to find a problem that blockchain will solve. AI, meanwhile, is so obviously useful even though it’s still kind of garbage.
So we’ll try to imagine a future where AI becomes good and useful, and we can start to ask questions like ‘Will this be the end of certain kinds of human labor? Will AI hollow out the middle class?’ If human drudgery and the myriad daily tasks done by mostly middle-class, white-collar workers at computers are eliminated, what happens to our economy, tax base, and society? If you replace a piece of an industry with AI, do you wind up hollowing out the whole thing?
I don’t know yet, but part of what we want to do is play things out in a scenario exercise: if this happens, then what happens next? We’re giving ourselves permission to play it out over time.