Perhaps it’s no surprise that chief AI officers are rapidly becoming a common sight among healthcare industry executives. Their job descriptions and the skills they must develop to succeed in this new role are as complex as the artificial intelligence and machine learning technologies they oversee.
Some provider organizations are hiring people with deep machine learning and data science experience to become CAIOs, or bolting those letters onto existing IT titles. But that may not be the best approach. Do these tech experts know healthcare rules? Business strategy? Governance?
This is the first article in a new Healthcare IT News series: Chief AI Officers in Healthcare.
Today we speak with Dennis Chornenky, Chief Advisor on Artificial Intelligence at UC Davis Health and CEO of Domelabs AI, which provides AI governance advisory services and systems to the healthcare and national security sectors and manages the Valid AI program.
Chornenky is an executive with more than 20 years of leadership and business strategy experience at the intersection of healthcare and advanced technology, with a focus on AI strategy and governance. He has held senior positions at the White House, UnitedHealth Group, and Morgan Stanley.
The chief AI advisor role at UC Davis Health is the equivalent of a chief AI officer for the health system. Chornenky carries the “advisor” title because he did not want to join as a full-time executive; he works primarily with the CEO, CIO and chief strategy officer.
Here, Chornenky discusses what UC Davis Health was looking for in its first chief AI officer, who that executive reports to, why his background makes him a good fit for the role, what his day-to-day work looks like, and the skills other aspiring chief AI officers should develop.
Q. How did UC Davis Health approach you to become Chief AI Officer? What were they looking for? Who would you report to?
A. Domelabs AI had good relationships with some people across UC and with UC Davis Health. And they were at a stage where their understanding of AI governance and the need for it was maturing.
They had put together a pretty good analytics oversight process as part of a broader health data oversight committee that had previously been mandated by the University of California Office of the President, and they had expanded it into the realm of AI. As we continue to build out those governance processes, the aim is to accelerate the organization’s ability to deploy AI technologies securely and more efficiently, and perhaps more quickly, without sacrificing oversight.
There was also some interest in the possibility of building collaborations with health systems and academic medical centers designed to foster the responsible adoption of generative AI technologies and AI governance best practices. And that eventually became what we now call Valid AI, which was launched last year.
The idea initially was to have a full-time chief AI officer to support these efforts. But I had just finished a full-time role at a large organization and wanted to work a little more independently, building a team and a business that could provide this type of service, both for the needs I just described and for other health systems more broadly.
So the University of California ended up issuing an RFP, which Domelabs AI applied for and, thankfully, won. It was a blind review, and several other applicants competed. Since then, we have been supporting several initiatives across UC Davis Health and, more broadly, across the University of California.
Q. This is your second post as chief AI officer. What background makes you a good fit to be a Chief AI Officer, and what skills should someone looking to become a Chief AI Officer have?
A. This is a great question and I see it being debated quite a bit today. Organizations are increasingly thinking about this role, hiring talent, and resourcing these roles and offices. For me, it’s a combination of things.
My background and interests just happened to be a very natural fit for this role as it developed. I have long had a strong interest in technology policy, AI policy and regulation, including the complex questions around fairness and bias in AI, the mathematical trade-offs involved, and how to communicate them in understandable ways to business and policy leaders.
I also have a strong interest in advanced technology, machine learning, AI and data science. I have spent a lot of time in academic environments, working, doing research and interacting with industry on many projects in this field. I have also spent a lot of time on business strategy. Earlier in my career I worked in the financial industry, holding various roles as an asset manager and investment banker at several large investment banks.
I’ve launched several startups, so I’ve been involved in innovation and business building. I feel all of these areas of experience are very important, and when combined with one more area they provide the expertise a chief AI officer needs: I have also spent a lot of time working in healthcare, with a focus on health information technology.
Having expertise in the healthcare field made a big difference.
So when I left the White House, finishing my work as a senior advisor and presidential innovation fellow on AI policy and pandemic response, I had also trained as an epidemiologist and gained experience in telemedicine. As I was coming to the end of that role, I had an opportunity to work with UnitedHealth Group, which until then had not had a chief AI officer.
So the role was created there for the first time. We initially had some conversations about what would be a good fit for the work I could do, and it naturally became clear that this was an important role.
I started the job helping to set up a large governance structure and manage a large portfolio of AI across clinical and business settings. That combination of skill sets in AI is probably relatively rare at this point.
Many organizations are taking someone with extensive experience in machine learning or data science, perhaps a PhD in that field, and appointing them chief AI officer. That may be a bit of a misnomer, because artificial intelligence is a multidimensional challenge that actually covers many other areas, including a rapidly expanding regulatory environment.
It is critical that a chief AI officer has a strong sense of what the regulatory environment for AI policy looks like, how it is evolving and what implications it has for the organization. To summarize the skill set, AI policy and regulation is one very important area. Business strategy is another.
Much of that is about being able to translate these more complex concepts into organizational strategy, so that AI investments fit into the broader organizational mission and strategy. Understanding the technology is also important. You don’t need a PhD in data science, but being aware of what these technologies can and cannot do, so you’re thinking correctly about the capabilities your organization wants to pursue, is equally important.
The fourth area, as I mentioned earlier, is domain expertise: really understanding your domain and how AI intersects with all of its different aspects. Whether you’re in healthcare, government or finance, and especially in regulated industries, I think it’s important to have as much of that capability as possible.
As an example of the regulatory environment, President Biden signed an executive order on AI last October, followed by additional guidance from the Office of Management and Budget, as typically happens after an executive order. That guidance now requires all federal agencies to maintain an AI inventory, establish an AI governance committee and have a chief AI officer.
So what a lot of federal agencies have done, when they’ve had a hard time understanding what this new role looks like, is add the AI title to an existing senior technology leader, such as the chief data officer, chief technology officer or chief information officer.
So now someone is both chief technology officer and head of AI. I think that’s a good step, at least for the time being, because many of these agencies have since opened roles for full-time, standalone chief AI officers and are interviewing candidates now. The role is still evolving in terms of how it’s defined and how organizations think about it. But it is a very multifaceted role, and it’s really important for organizations to keep that in mind.
Q. Please describe the AI part of your work at UC Davis Health. Broadly speaking, what is expected of you? More specifically, what is a typical day for you?
A. Organizations have approached this role a little differently here and there. A lot of it depends on what the organization already has in place. As I mentioned earlier, UC Davis Health already had a really good infrastructure for AI oversight and some really smart people working on those topics. At organizations that are less mature in this area, the chief AI officer may end up doing a lot of work just building the most basic foundations of AI governance and AI oversight.
UC Davis Health already had a great, pretty robust process in place, so we ended up focusing on some of the more strategic aspects. We built an AI strategy.
We expanded the AI roadmap, which helps the organization identify the areas where it wants to target its investments and the types of AI capabilities it wants to build over progressive time horizons, such as 12, 18 or 36 months, or whatever time periods the organization wants to consider. I also think a lot about education across different areas of the organization.
Since I took on this role, I’ve received a lot of requests from groups in different disciplines, whether it’s the clinical side, the emergency department, cardiology or oncology, that just want to learn more about AI.
So I spend a lot of time giving presentations and getting on calls with the leaders of these groups, not only to make them aware of our organization-wide efforts but also to provide an educational perspective, and perhaps even to offer some suggestions and guidance on how they might structure their own mini AI implementation roadmaps specific to their departments.
That includes how they think about developing those capabilities, whether they should build something in-house, whether they have the capabilities and resources to do so, or what kinds of vendors they might consider, and how we think about all of that at a larger, enterprise level.
And also on the legal, compliance and data side, I talk to a lot of people who are really interested in understanding the intersection of AI and compliance, and the intersection of AI and data, data stewardship and data governance.
How can you evolve your data processes to make your data more AI-ready and ensure appropriate, diverse, fair and representative datasets for use in AI applications? That is critical to data security and to fairness in how we deliver care.
For valuable bonus content not included in this article, click here to watch the HIMSS TV video of this interview. Part 2 of this interview will be posted tomorrow.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.