The incredible growth of the artificial intelligence industry in recent years has caused excitement, surprise, and more than a little fear.
The promise of new workplace efficiencies and automation comes with concerns about job losses. The remarkable power of generative AI platforms means that both advanced analysis and plagiarism are easier than ever.
As AI-powered industries plunge headlong into an unknown future, the University of Virginia Darden School of Business is grappling with the many implications of AI’s perils and promise.
At a recent conference hosted by the LaCross Institute for Ethical Artificial Intelligence in Business, speakers from academia, government, and industry sought to make sense of a once-in-a-generation technological marvel and its extraordinary ethical complexity.
Welcoming attendees to the newly launched institute’s first public event, Dean Scott Beardsley, whose career has focused on both the ethics of AI and the regulation of technology, recalled writing a paper in the 1990s titled “Broadband Changes Everything.” The world is now similarly approaching a moment when AI may change everything, he said, with consequences comparable to those of the widespread adoption of high-speed internet.
One of the major societal challenges is to ensure that AI technologies are adopted and developed responsibly, especially given the many unknowns in the nascent field. Responsible development can never be taken for granted.
“Technology is a rapidly changing subject, and technological advances do not wait for certainty,” Beardsley said. “Many of us are waiting to see what will happen, but I believe the only certainty is uncertainty. I am incredibly optimistic in some ways, but at the same time extremely worried.”
The challenge for individuals and organizations, Beardsley said, is to ensure that AI develops in a way that contributes to human well-being and fulfillment: AI “at the service of humanity” rather than “humans at the service of AI.”
The theme of the conference, the “Ethical AI Value Chain,” served as a framework to help leaders and academics consider the business and ethical issues that arise in every aspect of the development and deployment of AI in business.
From infrastructure and data to algorithms, applications, impact, and talent, and at every step in between, business and ethical issues arise and are often in tension. These tensions must be addressed if AI is to fulfill its promise ethically.
“We are suggesting that ethical AI is an outcome, not a feature, of a product,” said Mark Ruggiano (MBA ’96), director of the LaCross Institute. “It is the end result of addressing a series of considerations across the AI value chain and making trade-offs among them ethically.”
This conference was held to explore each step of the ethical AI value chain and encourage participants to develop an agenda to guide ethical AI efforts in the years to come.
Introducing accountability
In one of the day’s keynote speeches, Professor Kirsten Martin (MBA ’99, Ph.D. ’06) of Notre Dame’s Mendoza College of Business, a leading expert on technology ethics, drew attention to the “value-laden” decisions made at nearly every stage of AI development and deployment across the ethical AI value chain, decisions at odds with the objectivity that much of AI claims.
Technology companies often try to avoid accountability, she said, by claiming that their algorithms are arcane “black boxes” and that their results are “objective, efficient and accurate.”
“The idea that everything is more efficient – this idea of efficiency, accuracy, objectivity – has become so pervasive in our assumptions about AI that we don’t even question what those claims might be hiding,” Martin said. The world is currently in the midst of a “hype bubble” around AI, she said, with many in the industry positioning the technology as an unambiguously positive force.
Responsible development of AI requires true accountability from the developers of the technology, Martin said, especially when companies exercise power over their stakeholders.
Companies developing new technology often attempt to avoid accountability for its negative impacts, she noted, and while such denials of responsibility are fairly typical of new technology development, they should not be accepted as a stopping point.
“Being told that we are not responsible for these negative impacts should not be seen as an end point, but as an ongoing part of fulfilling our responsibility,” Martin said. “We should expect companies to be held accountable for decisions that affect other people.”
A technology as pervasive as AI is also disrupting traditional considerations of who counts as a stakeholder, Martin said, because AI involves a significant number of “conscripted stakeholders”: people who are given no choice about participating in a company’s value creation.
“We have a body of scholarship that assumes all stakeholders are in the relationship voluntarily and benefit mutually; otherwise, they would leave,” Martin said. “So what happens when the stakeholders most affected by a company’s decisions are not there voluntarily, and the decisions are not in their interest?”
When the “fundamental premise” of mutually beneficial value creation is challenged, the implications for management, leadership and academia are all significant, Martin said.
Strategic value of AI
Darden Professors Raj Venkatesan and Tom Davenport delivered separate keynotes exploring various aspects of AI capabilities and implementation within the enterprise. Venkatesan, author of “The AI Marketing Canvas,” shared examples of AI-related failures as well as companies that have leveraged AI capabilities to build personalized relationships with consumers and increase engagement.
Generative AI fine-tuned on a company’s unique customer data will increasingly be a source of competitive advantage, Venkatesan said, while companies “are looking at an increasingly difficult frontier” in which they interact with consumers’ personal AI agents rather than with the individuals themselves.
Davenport, who recently co-authored “All In On AI: How Smart Companies Win Big With Artificial Intelligence,” said that despite the excitement and trepidation, generative AI remains in the experimental stage for most companies, with relatively few deployments in production environments. While most chief data officers believe generative AI will transform their organizations, he said, they generally “still don’t understand how to get real economic value from this technology,” and hurdles around data quality and uncertain use cases remain.
Smart companies pursuing AI implementation do so intentionally, Davenport said, following a process chain that runs from strategy to use cases, model development, deployment, and finally monitoring: a clear plan that is the opposite of “random acts of AI.”
Collaborating with AI
With panel sessions dedicated to infrastructure, data, tools, applications, management, and talent, in addition to industry sessions on healthcare, technology, and talent management, conference organizers took a broad approach to what the AI conversation encompasses in 2024.
In a session dedicated to the essential leadership skills of the future, Darden Professor Roshni Raveendran said humans continue to excel at what she called multiple intelligences: the ability to combine physical, emotional, and perceptual intelligence to plan and react as a specific context demands. Such high-level capabilities continue to separate humans from AI, said Raveendran, who studies the impact of technology on individuals and organizations.
Raveendran said she is particularly interested in how human capabilities are enhanced or changed when combined with new technologies.
“The idea is not that AI can replace humans, but rather that humans with AI can replace humans without AI,” Raveendran said. Still, the adoption of AI capabilities remains controversial in some organizations, she added, citing students returning from summer internships who reported that their companies prohibited the use of generative AI tools.
“That’s going to change, because if an organization has to adapt and learn, the people in that organization have to learn with AI and learn how to deploy AI, rather than just shutting it down,” Raveendran said. “It’s learning in collaboration with AI.”
Darden Professor Gabrielle Adams said the rapid growth of AI will require changes in both the way individuals learn and the way they teach.
“Now that we have AI as a partner, we need to be very intentional about how we reinvent pedagogy and how we actually change our psychology,” Adams said. “I don’t think we built our education system for this.”
Turning ideas into action
Although the conference was the first major event under the LaCross Institute banner, it built on decades of ethical leadership work at UVA and Darden.
Established in 2024 with the largest gift in Darden’s history, from David LaCross (MBA ’78) and his wife, Kathy, the LaCross Institute aims to ensure that concepts such as business ethics and responsible leadership are incorporated into the development of AI at UVA and beyond. Doing so will require focus and deliberate action, and Darden Professor Yael Grushka-Cockayne concluded the conference with a planning session that gathered many of the day’s ideas into what she called an ethical AI action agenda.
Grushka-Cockayne urged attendees to prioritize increasing their AI acumen and building their organizations’ AI capabilities in the coming years.