Emerging technologies meet both advocates and resistance as users weigh the potential benefits against the potential risks. To implement new technologies successfully, we must start small, with a few simplified forms fitted to a handful of use cases, establishing proof of concept before scaling up. Artificial intelligence is no exception, but it carries the added challenge of intruding into the cognitive sphere, which has always been the prerogative of humans. Only a small circle of specialists understands how this technology works, so broader public education is needed as AI becomes ever more integrated into society.

I recently connected with Josh Feast, CEO and cofounder of Boston-based AI company Cogito, to discuss the role of AI in the new era of work. Here’s a look into our conversation.

Igor Ikonnikov: Artificial intelligence can be an incredibly powerful tool, as you know from your experience founding and growing an AI-based company. But there are plenty of people who have expressed concerns around its impact on the workforce and whether this new technology will replace them one day. So let’s cover that topic first: Do you have any concerns about AI coming for jobs?

Josh Feast: You’re right, this question has been asked many times in recent years. I believe it is time to focus on how we can shape the AI and human relationship to ensure we’re happy with the outcome, rather than being bystanders to an uncertain future. What I mean is, we’re living in a world where humans and machines are and will continue to work alongside each other. So, instead of fighting technological progress, we must embrace and harness it. Our emotionality as humans will always ensure we remain key assets in the workplace, even as companies deploy AI technology to revolutionize the modern enterprise. The idea is not to replace humans but to augment — or simply help — them with technology.

David De Cremer, Provost’s Chair and Professor at NUS Business School, and Garry Kasparov, chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative, agree. They previously explained, “The question of whether AI will replace human workers assumes that AI and humans have the same qualities and abilities — but, in reality, they don’t. AI-based machines are fast, more accurate, and consistently rational, but they aren’t intuitive, emotional, or culturally sensitive.” It is in combining the strengths of AI and humans that we can be even more effective.

Ikonnikov: The last 15 months have been disruptive in many ways, including a steep increase in both the value of in-person interactions and the need for a higher degree of automation. Is this the opportunity to combine those strengths?

Feast: More than a year in, with remote work now a norm for millions of people, almost everything we do is digitized and mediated by technology. We’ve seen improvements in efficiency and productivity, but also a growing need to fill the empathy deficit and increase energy and positive interactions. In other words, AI is already working in symbiosis with humans, so it’s up to us to define what we want that partnership to look like going forward. This consideration requires an open mind, active optimism, and empathy to see the full potential of the human-AI relationship. I believe this is where human-aware technology can play a big role in shaping the future.

Ikonnikov: Can you elaborate on what human-aware technology is?

Feast: Human-aware technology has the ability to sense what humans need in the moment to better augment our innate skills, including the ability to respond to and support our emotional and social intelligence. It opens doors to technological augmentation in entirely new areas. One example today is “smart” prosthetics, which lean on human-machine interfaces that help prosthetic limbs truly feel like an extension of the body, such as the robotic arm being developed at the Johns Hopkins Applied Physics Laboratory. Complete with humanlike reflexes and sensations, the robotic arm contains sensors that give feedback on temperature and vibration, as well as collecting data to mimic what human limbs are able to detect. As a result, it responds much like a natural arm.

The same concept applies to humans working at scale in an enterprise — where a significant part of our jobs involves collaborating with other people. Sometimes, in these interactions, we miss cues, get triggered, or fail to see another person’s perspective. Technology can support us here as an objective “recognizer” of patterns and cues.
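To make that “recognizer” idea concrete, here is a minimal, hypothetical sketch (in Python) of how turn-level timestamps from a call could be scanned for two simple cues: talk-time imbalance and interruptions. The Turn structure, the threshold, and the heuristics are illustrative assumptions, not a description of Cogito’s product.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    speaker: str   # e.g., "agent" or "customer" (labels assumed for illustration)
    start: float   # seconds from call start
    end: float     # seconds from call start

def conversation_cues(turns: List[Turn], imbalance_threshold: float = 0.7):
    """Flag simple conversational cues: talk-time imbalance and interruptions.

    Illustrative heuristic only, not a production signal model.
    """
    talk_time = {}
    interruptions = 0
    for i, turn in enumerate(turns):
        talk_time[turn.speaker] = talk_time.get(turn.speaker, 0.0) + (turn.end - turn.start)
        # Count a cue when a turn begins before the previous speaker has finished.
        if i > 0 and turn.start < turns[i - 1].end and turn.speaker != turns[i - 1].speaker:
            interruptions += 1

    total = sum(talk_time.values()) or 1.0
    dominant_share = max(talk_time.values()) / total

    cues = []
    if dominant_share > imbalance_threshold:
        cues.append(f"talk-time imbalance ({dominant_share:.0%} by one speaker)")
    if interruptions:
        cues.append(f"{interruptions} interruption(s) detected")
    return cues

# Example: a short call where the agent dominates and interrupts once.
turns = [
    Turn("agent", 0.0, 20.0),
    Turn("customer", 20.0, 25.0),
    Turn("agent", 24.0, 60.0),  # starts before the customer finishes
]
print(conversation_cues(turns))
```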

Ikonnikov: As we continue to leverage this human-aware AI, you’ve said we must find a balance between machine intelligence and human intelligence. How does that translate to the workplace?

Feast: Finding that balance and optimizing for it to successfully address workplace challenges requires several levers to be pulled.

To empower AI to help us, we must actively and thoughtfully shape it; the more we do so, the more helpful it will be to individuals and organizations. In fact, a team from Microsoft’s Human Understanding and Empathy group believes that, with the right training, “AI can better understand its users, more effectively communicate with them, and improve their interactions with technology.” We can train the technology through processes similar to those we use to train people: rewarding it for achieving external goals, like completing a task on time, but also for achieving our internal goals, like maximizing our satisfaction. These are otherwise known as extrinsic and intrinsic rewards, respectively. In giving AI data about what works for us intrinsically, we increase its ability to support us.
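As a rough sketch of the extrinsic-plus-intrinsic idea, a training signal could blend a task-based reward with a human-centered satisfaction signal. The function and weighting below are illustrative assumptions, not a recommendation or a description of any particular system.

```python
def blended_reward(extrinsic: float, intrinsic: float, intrinsic_weight: float = 0.3) -> float:
    """Combine an external, task-based reward (e.g., task finished on time)
    with an internal, human-centered signal (e.g., reported satisfaction).

    Both inputs are assumed to be normalized to [0, 1]; the weight is a
    tunable assumption, not a recommended value.
    """
    return (1.0 - intrinsic_weight) * extrinsic + intrinsic_weight * intrinsic

# Example: the task was completed on time (1.0) but the user was only
# moderately satisfied (0.5), so the blended signal is pulled down slightly.
print(blended_reward(extrinsic=1.0, intrinsic=0.5))  # 0.85
```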

Ikonnikov: As the workplace evolves and AI becomes more ingrained in our daily workflows, what would the outcome look like?

Feast: Increased success at work will come when organizations leverage humans, paired with AI, to drive an enhanced experience in the moments that matter most. It is in those in-the-moment interactions that the new wave of opportunity arises.

For example, in an in-person conversation, both participants initiate, detect, interpret, and react to each other’s social signals in what some may call a conversational dance. This past year, we’ve all had to communicate over video and voice calls, challenging the nature of that conversational dance. In the absence of other methods of communication such as eye contact, body language, and shared in-person experiences, voice (and now video) becomes the only way a team member or manager can display emotion in a conversation. Whether it’s a conversation between an employee and customer or employee and manager, these are make-or-break moments for a business. Human-aware AI that is trained by humans in the same way we train ourselves can augment our abilities in these scenarios by supporting us when it matters and driving better outcomes.

Ikonnikov: There has been a big shift in AI conversations recently as it relates to regulations. The European Union, for example, unveiled a strict proposal governing the use of AI, a first-of-its-kind policy. Do you think AI needs to be regulated better?

Feast: Collectively, we have an obligation to create technology that is effective and fair for everyone — we’re not here to build whatever can be built without limits or constraints when it comes to people’s fundamental rights. This means we have a responsibility to regulate AI.

The first step to successful AI regulation is data regulation. Data is a pivotal resource that defines the creation and deployment of AI. We’re already seeing unintended consequences of unregulated AI. For example, there isn’t a level playing field across organizations when it comes to AI deployment, because there are stark differences from company to company in the amount and quality of data they hold. This imbalance will affect the development of technology, the economy, and more. We, as leaders and brands, must actively work with regulatory bodies to create common parameters that level the playing field and increase trust in AI.

Ikonnikov: How can creators of AI technology earn that trust?

Feast: We have to be focused on implementing ethical AI by delivering transparency into the technology and communicating a clear benefit to all users. This extends to supplying education and upskilling opportunities. We also have to actively mitigate the underlying biases of the models and systems we deploy. AI leaders and creators must do extensive research on de-biasing approaches, for example methods for detecting and mitigating gender and racial bias. This is an important step on the path to increasing trust in AI and responsibly implementing the technology across organizations and populations.
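One concrete way to examine bias is to audit a model’s decisions with a simple fairness metric. The sketch below computes a demographic parity gap on hypothetical audit data; it is only one check among many, offered as an illustrative assumption rather than a complete de-biasing method.

```python
from collections import defaultdict
from typing import Iterable, Tuple

def demographic_parity_gap(predictions: Iterable[Tuple[str, int]]) -> float:
    """Compute the gap in positive-prediction rates across groups.

    `predictions` is an iterable of (group, predicted_label) pairs, where the
    label is 1 for a positive outcome. A large gap is one signal (among many)
    that a model may be treating groups differently.
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        positives[group] += int(label == 1)

    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (group, model decision)
audit = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(demographic_parity_gap(audit))  # ~0.33
```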

We must also ensure that opportunity is given to creators of AI who are themselves diverse, coming from a range of demographics, immigration statuses, and backgrounds. It is the creators who define which problems we choose to address with AI, and more diverse creators will result in AI addressing a broader range of problems.

Without these parameters — without trust — we can’t fully reap all the benefits of AI. On the flip side, if we get this right and, as creators of AI and leaders of related organizations, do the work to earn trust and thoughtfully shape AI, the result will be responsible AI that truly works in symbiosis with us, more effectively supporting us as we forge the future of work.
