Women in the AI field are making research breakthroughs, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to emphasize the importance of their voices, work, and experience and to shine a light on some of these leaders. In this series, publishing Fridays, we’re diving deeper into conversations with this year’s winners, whom we honored recently at Transform 2021. Check out last week’s interview with the winner of our AI research award. 

When you hear about AI ethics, it’s mostly about bias. But Noelle Silver, a winner of VentureBeat’s Women in AI responsibility and ethics award, has dedicated herself to an often overlooked part of the responsible AI equation: AI literacy.

“That’s my vision, is that we really increase literacy across the board,” she told VentureBeat of her effort to educate everyone from C-suites to teenagers about how to approach AI more thoughtfully.

After presenting to one too many boardrooms that could only see the good in AI, Silver came to see this lack of knowledge, and the inability to ask the important questions, as a danger. Now she's a consistent champion for public understanding of AI, and she has also established several initiatives supporting women and underrepresented communities.

We’re excited to present Silver with this much-deserved award. We recently caught up with her to chat more about the inspiration for her work, the misconceptions about responsible AI, and how enterprises can make sure AI ethics is more than a box to check.

VentureBeat: What would you say is your unique perspective when it comes to AI? What drives your work?

Noelle Silver: I’m driven by the fact that I have a house full of people who are consuming AI for various reasons. There’s my son with Down syndrome, and I’m interested in making the world accessible to him. And then my dad who is 72 and suffered a traumatic brain injury, and so he can’t use a smartphone and he doesn’t have a computer. Accessibility is a big part of it, and for the products I have the opportunity to be involved in, I want to make sure I’m representing those perspectives.

I always joke about how when we first started on Alexa, it was a pet project for Jeff Bezos. We weren't consciously thinking about what this could do for classrooms, nursing homes, or people with speech difficulties. But all of those are really relevant use cases Amazon Alexa has now invested in. I always quote Arthur C. Clarke, who said, "Any sufficiently advanced technology is indistinguishable from magic." And that's true for my dad. When he uses Alexa, he's like, "This is amazing!" You feel that it mystifies him, but the reality is there's someone like me with fingers on a keyboard building the model that supports that magic. I think it's important to be transparent and let people know there are humans making these systems do what they do, and the more diverse and inclusive those humans can be in their development, the better. So I took that lesson, and now I've talked to hundreds of executives and boards around the world to educate them about the questions they should be asking.

VentureBeat: You've created several initiatives championing women and underrepresented communities in AI, including the AI Leadership Institute, Women in AI, and more. What led you to launch these groups? And what is your plan and hope for them in the near future and the long run?

Silver: I launched the AI Leadership Institute six years ago because I was being asked, as part of my profession, to go and talk to executives and boards about AI. And I was selling a product, so I was there to, you know, talk about the art of the possible and get them excited, which was easy to do. But I found there was really a lack of literacy at the highest levels. And the fact that those with the budgets didn't have that literacy made it dangerous: someone like me could tell a good story, tap into the optimistic feels of AI, and they couldn't recognize that's not the only course. I tell the good and the bad, but what if it's someone who's trying to get them to do something without being as transparent? And so I started that leadership institute with the support of AWS, Alexa, and Microsoft to just try and educate these executives.

A couple of years later, I realized there was very little diversity in the boardrooms where I was presenting, and that concerned me. I met Dr. Safiya Noble, who had just written Algorithms of Oppression about the craziness that was Google's algorithms years ago. You know, you type "CEO" and it only shows you white males — those types of things. That was a signal of a much larger problem, but I found that her work was not well known. She wasn't a keynote speaker at the events I was attending; she was in, like, a sub-session. And I just felt the work was critical. And so I started Women in AI to be a mechanism for it. I did a TikTok series on 12 African American women in AI to know, and that turned into a blog series, which turned into a community. I have a unique ability, I'll say, to advocate for that work, and so I felt it was my mission.

VentureBeat: I’m glad you mentioned TikTok because I was going to say, even besides the boardroom discussions, I’ve seen you talking about building better models and responsible AI everywhere from TikTok to Clubhouse and so on. With that, are you hoping to reach the masses, get the average user caring, and get awareness bubbling up to decision makers that way?

Silver: Yeah, that’s right. Last year I was part of a LinkedIn learning course on how to spot deepfakes, and we ended up with three million learners. I think three or four of the videos went viral. And this wasn’t YouTube with its elaborate search model that will drive traffic or anything, right. So I started doing more AI literacy content after that because it showed me people want to know about these emerging technologies. And I have teenagers, and I know they’re going to be leading these companies. So what better way to avoid systemic bias than by educating them on these principles of inclusive engineering, asking better questions, and design justice? What if we taught that in middle or high school? And it’s funny because my executives are not the ones I’m showing my TikTok videos to, but I was on the call with one recently and I overheard her seventh grade daughter ask, “Oh my gosh. Is that the Noelle Silver?” And I was like, you know, that’s when you’ve got it — when you’ve got the seventh grader and the CEO on the same page.

VentureBeat: The idea of responsible AI and AI ethics is finally starting to receive the attention it needs. But do you fear — or already feel like — it’s becoming a buzzword? How do we make sure this work is real and not a box to check off?

Silver: It's one of those things that companies realize they have to have an answer for, which is great. Like, good, they're creating teams. The thing that concerns me is: how impactful are these teams? When I see something ethically wrong with a model and I know it's not going to serve the people it's meant to, or I know it's going to harm someone, when I pull the chain as a data scientist and say "we shouldn't do this," what happens then? Most of these ethics organizations have no authority to actually stop production. It's just like diversity and inclusion — everything is fine until you tell me this will delay going to market and we'll lose $2 billion in revenue over five years. I've had CEOs tell me, "I'll do everything you ask, but the second I lose money, I can't do it anymore. I have stakeholders to serve." So if we don't give these teams the authority to actually do anything, they're going to end up like many of the ethicists we've seen: they either quit or get pushed out.

VentureBeat: Are there any misconceptions about the push for responsible AI you think are important to clear up? Or anything important that often gets overlooked?

Silver: I think the biggest is that people often think about ethical and responsible AI only in terms of bias, but it's also about how we educate the users and communities consuming this AI. Every company is going to be data-driven, and that means everyone in the company needs to understand the impact of what that data can do and how it should be protected. These rules barely exist for the teams that create and store the data, and they definitely don't exist for other people inside a company who might happen to run into that data. AI ethics isn't reserved just for the practitioners; it's much more holistic than that.

VentureBeat: What advice do you have for enterprises building or deploying AI technologies about how to approach it more responsibly?

Silver: The reason I went to Red Hat is because I actually do believe in open source communities where different companies come together to solve common problems and build better things. What happens when health care meets finance? What happens when we come together and share our challenges and ethical practices and build a solution that reaches more people? Especially when we’re looking at things like Kubernetes, which almost every company is using to launch their applications. So being part of an open source community where you can collaborate and build solutions that serve more people outside of your limited scope, I feel like that’s a good thing.
