Research Wrap: Balancing AI's Business Value and Ethics - Sefiani and Clarity Global report
Feature


"Value is for humans; values are for the good of humanity."

Sefiani, part of Clarity Global, has released an insights report looking at the value and values of AI. The report explores how brands can strike a balance between driving business value from AI technology and establishing its ethical values.

The release of the report follows a panel discussion hosted by Clarity Global in San Francisco, which featured industry leaders from Accenture, AI21 Labs, Helm.ai, and VentureBeat.

AI's Value
Pankaj Dugar, Senior Vice President and General Manager, North America, at AI21 Labs defined AI's value by stating that successful enterprise AI adoption requires a focus on driving efficiency and personalisation, while ensuring users understand the purpose of using a given technology: "Measurable impact should underlie everything: efficiency’s impact is ROI; personalization’s impact is adoption; and reasoning’s impact is trust. Exploration is the outlier because it has a different meaning today than it did in 2020."

Gina Joseph from VentureBeat added that AI's impact lies not just in its commercial benefits but in its capacity for ethically positive outcomes. Enterprises have to think about how adopting and using AI can drive positive, transformative results, and the technology needs to solve existing problems to deliver that impact.

Enterprise reinvention through AI
Despite the hype, the report said AI adoption has remained steady at large organisations over the last few years: 42 per cent of enterprise IT leaders have already actively deployed AI, while 40 per cent are still only ‘actively exploring’ it. Benny Du, Global Cloud First Data & AI Senior Principal - Ecosystem Lead for GenAI and LLM at Accenture, noted that this is due to the misconceptions and brand reputation concerns surrounding AI, fuelled by issues such as class action lawsuits over AI-generated content.

The report said enterprises need to realise that AI tech can help them make sense of the mountain of data they inevitably have stored within their four walls (and clouds). In marketing and communications, it’s acknowledged that a data-driven approach is crucial in today’s landscape, but accessing and strategically using this data remains a challenge, with 87 per cent of marketers saying data is their company’s most under-utilised asset.

The report highlighted another reason enterprises are hesitant to adopt AI technology: the fear that it will replace many human jobs. It noted that although automation will displace 85 million jobs by 2025, it will also create 97 million new ones. Regulators and organisations need to treat humans and AI as mutually beneficial, and build adoption strategies accordingly.

Vanessa Camones, Head of Marketing at Helm.ai said, "...we need to ensure it (AI technology) retains a human-centred point of view...it’s a balancing act between achieving commercial benefit and cementing the importance of ‘human-by-design’."

AI's Values - ethics and AI policy
AI technology needs to be adopted thoughtfully and influenced purposefully, according to the report. It needs to solve a problem, and humanity's role is essential to creating the best possible outcome. This discussion led to the question: "How do we build values around AI and whose values should serve as the blueprint?"

Although AI regulation is being discussed within and between governments globally, the report found that only 10 per cent of organisations have a formal, comprehensive generative AI policy in place, and one in four say not only is there no policy, but there’s no plan to create one. The survey also indicated that 41 per cent of respondents working in audit, risk, security, data privacy, and IT governance said not enough attention is being paid to ethical standards for AI implementation.

The report pointed to a potential 'wait and see' approach when it comes to AI policies, but with the speed of AI development and adoption, not self-policing in any way could prove far more costly than acting early.

Vanessa Camones explained: "...businesses are taking stock and figuring out their internal policies before they engage with and implement AI technology. They’re also waiting to see what regulators say so they don’t have to go back on adoption once legislation is brought in."

AI and diversity
To create ethical AI, the data it is trained on needs to be diverse, reflecting different languages, experiences, and views. The report suggested that this isn't easy, as the data used to train generative AI, drawn from sources such as search engines, is biased and lacks representation, so different sources need to be identified and used as much as possible. Diverse data also has to be supported by diverse teams, with diversity recognised as a driver of higher ROI: companies with executive teams in the top quartile for gender diversity were 25 per cent more likely to have above-average profitability than companies in the fourth quartile, according to the report.

Gina Joseph, VentureBeat's Chief Strategy Officer, said: "We need to ensure we have diverse teams building, testing, and using it, because this will create a more fair and ethical output. One of my Harvard Business School professors taught me about ‘epistemological fragmentation’ - which basically means as individuals, we don’t know what we don’t know."

The report pointed out that much of the press coverage of AI technology’s growth focuses on the negative, but it’s also important to reflect on how AI, when developed ethically and with the impact on all end-users in mind, can support diversity and accessibility.

Consumer education must be an AI development priority
Another way AI can be developed with both ethics and value is through consumer education. A lack of understanding is preventing effective debate, and without it many people default to rejection and fear. The report suggested that comms and marketing leaders at AI companies, as well as at businesses adopting AI, need to take responsibility for providing their audiences with transparent, consumable education.

You can access the full report here.
