A chief AI officer is no longer enough – why your business needs a ‘magician’ too




ZDNET’s key takeaways

  • While many companies appoint CAIOs, others aren’t so sure.
  • Insurance specialist Howden has a director of AI productivity.
  • This expert ensures collaboration and effective asset exploitation.

There’s a lot of debate about who should be responsible for ensuring the business makes the most out of generative AI. Some experts suggest the CIO should oversee this crucial role, while others believe the responsibility should lie with a chief data officer.

Beyond these existing roles, other experts champion the chief AI officer (CAIO), a newcomer to the C-suite who oversees key considerations, including governance, security, and identification of potential use cases. 

Also: 5 ways you can stop testing AI and start scaling it responsibly in 2026

ZDNET reported last year that 60% of companies already have a CAIO, and another 26% are planning to make an appointment this year.

Experts argue that the rise of the CAIO illustrates AI’s significant role in modern business. Yet not everyone believes a dedicated CAIO is necessarily the best solution to the challenges associated with implementing AI.

Last year, Kirsty Roth, chief operations and technology officer at Thomson Reuters, told me her organization doesn’t have a CAIO. “No, we’re not big on those kinds of things,” she said, suggesting the inherent role of AI in her firm’s processes meant emerging technology must be considered by all staff rather than being viewed in isolation.

However, importantly, Roth also recognized that the ever-increasing significance of AI to business operations means someone senior must ensure demands for emerging tech are managed effectively: “I guess if you have nobody playing those roles, then you probably need to think about who you’ve got or how you augment your existing managers.”

Also: 3 smart ways business leaders can build successful AI strategies – before it’s too late

That sentiment resonated with Barry Panayi, group chief data officer at insurance firm Howden, who told ZDNET that his company has created a dedicated role for a director of AI productivity, a specialist who sits between Panayi’s data organization and the IT department, creating a strong collaborative interface with the rest of the business.

So, what does this specialist do? Here are three reasons why your business needs a director of AI productivity.

1. Connecting everything

Many people across other business units are confused about the distinct roles of the technology and data teams. When Panayi joined Howden in August last year, he decided to head that issue off at the pass.

Barry Panayi: "These tools are new enough that we do need people to help with adoption." (Image: Howden)

“The CTO and I sit next to each other, and we said very early on, let’s come up with a clear line on this, because if we don’t agree, it’s not going to be easy. And we both thought the same thing,” he said, referring to their solution to this perennial challenge.

“If you’re buying a tool and people are just using it, then tech should own and run that, because it needs to sit on their platforms and be plumbed in properly,” he said.

“There’s a bit on our data side, which is building bespoke models, like for machine learning. And then there’s a bit in the middle, where you do both, so you might use the ChatGPT API to do the LLM processing for something, and then we will write code on top.”
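That middle ground — the LLM does the language processing while bespoke code on top handles the plumbing — can be sketched in a few lines. This is an illustrative example only, not Howden's code: the function names (`chunk`, `summarize_document`) and the injectable `complete_fn` are assumptions, with the real ChatGPT API call left as a swappable callable.

```python
# A minimal sketch of "use the LLM API for processing, write code on top":
# custom code handles chunking long documents and cleaning up the results,
# while the LLM call itself is injected as a plain callable so it can be
# backed by the real ChatGPT API or a stub. All names are illustrative.

from typing import Callable


def chunk(text: str, max_chars: int = 2000) -> list[str]:
    """Split a long document into pieces small enough for one prompt."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def summarize_document(text: str, complete_fn: Callable[[str], str]) -> str:
    """Summarize each chunk via the LLM, then stitch the results together.

    complete_fn takes a prompt string and returns the model's reply.
    """
    partials = [complete_fn(f"Summarize briefly:\n{c}") for c in chunk(text)]
    # The "code on top": combine partial summaries, dropping blanks
    # and exact duplicates while preserving order.
    seen: set[str] = set()
    combined = []
    for p in partials:
        p = p.strip()
        if p and p not in seen:
            seen.add(p)
            combined.append(p)
    return " ".join(combined)
```

In production, `complete_fn` would typically wrap a real API client call; in tests, a stub function standing in for the model keeps the surrounding logic verifiable without network access.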

Also: 8 urgent updates your IT playbook needs to survive the AI era

Panayi referred to this split between IT and data as “a build versus buy decision, with a sliver in the middle.” That sliver is the meeting point where the director of AI productivity plays a crucial role.

“I think companies are missing a trick if they’ve not got someone ensuring that people are using things like Copilot and so on. These tools are new enough that we do need people to help with adoption,” he said.

“And at the moment, I don’t think we can assume the narrative is correct that people using AI at home to help them book holidays is the same as how it can help them be more productive at work.”

2. Ensuring assets are exploited

Panayi said the role of Howden’s director of AI productivity is to ensure everyone across the business uses enterprise-grade gen AI services effectively.

“I see it as a way of sweating technology and tools. Their immediate concern is making sure our enterprise is doing its utmost to use the tools that we are paying for, and Copilot, ChatGPT, and Anthropic are our main three enterprise licenses,” he said.

Also: 10 ways AI can inflict unprecedented damage in 2026

“We realize that, at this moment, Copilot is great if you’re in Office and you need AI to summarize things. Then you’ve got Claude, which, if you’re an engineer or in finance, can give you detailed information. And then there’s ChatGPT, who’s your kind of brain for hire.”

The company uses other tools and models for specific projects. Panayi described overseeing the effective use of these enterprise-grade and spot solutions as a job and a half, given that Howden employs more than 20,000 people in 55 countries.

“It’s like he’s a magician, showing people who have to deal with thousands of pages of stuff how to get the answers they need quickly,” he said, outlining how the director of AI productivity highlights the benefits of gen AI to the firm’s brokers.

“These people are not at the computer all day. They are out in the market, talking and making decisions.”

Also: How to actually use AI in a small business: 10 lessons from the trenches

Panayi said the effective exploitation of emerging technology requires a nuanced approach, which the director of AI productivity provides by outlining how AI can be used safely, securely, and effectively.

“He’s got so many examples of people saying things like a task that took them a week to do can now be done in 20 minutes, with an agent that runs the operation every Monday,” he said.

3. Focusing on competitive advantage

The director of AI productivity collaborates with Howden’s IT team to ensure employees understand which tools are ready and available for work tasks.

“Getting everyone using these tools is not a data thing; it’s a tech thing. That approach takes away a ton of demand from the data team, because managing the demand for gen AI is not on my plate, thankfully,” he said.

Also: Nervous about the job market? 5 ways to stand out in the age of AI

“I’m not out there pushing Copilot and other models — that is an IT tooling decision, which I think sounds obvious when you say it. But I’ve seen too many data teams drown because they’re trying to manage the organization’s use of Copilot.”

Panayi said the director of AI productivity’s ability to get everyone using enterprise-grade gen AI tools means his data team can work on projects that produce the most bang for the business’s buck.

“All that gen AI effort is great and powerful, and creates a lot of productivity, but I focus more on the machine learning end of AI, because I think there’s incredible power there,” he said.

In an age when many organizations and employees have access to similar off-the-shelf models and agents, Panayi said your competitive advantage comes from exploiting proprietary data and models.

Also: 3 ways anyone can start using AI at work today – safely

“We’re assessing risk, the potential impact, and deciding what the price of our products should be,” he said.

“That’s a numbers science game, so that we can give the broker scenarios to apply their knowledge. It’s about supercharging the brokers or underwriters with all new insights and ideas, and then they’ll construct the product.”
