
How businesses can navigate an AI-saturated 2025 

3 February, 2025 | 
Pureprofile 

Originally published: Kochies Business Builder, 30 January 2025

By Martin Filz, CEO

In early 2023, ChatGPT reached 100 million monthly active users just two months after its public release, making it the fastest-growing consumer application in history according to a UBS study. Since then, usage has more than quadrupled, with OpenAI now reporting hundreds of millions of weekly active users.

Meanwhile, Nvidia is the hottest stock on the market, recently overtaking Apple as the most valuable company in the world. Outside gaming and graphics, Nvidia isn’t typically a consumer-facing brand. However, the increasingly ubiquitous nature of AI has fuelled demand for its specialised AI chips and sent its value skyrocketing.

Since GenAI went mainstream in 2023, many organisations have understandably maintained a healthy scepticism about it with regard to privacy, data sharing and the reliability of its output. But with Australia ranking fourth globally in GenAI usage and maturity, the stigma around businesses harnessing the technology is dissipating. Leaders just have to be smart about how they navigate these AI-saturated waters.

In 2024, the Australian government started to make headway on local AI regulation. A bipartisan committee recently released a recommendations report that classified ChatGPT and Meta’s and Google’s generative AI products as “high risk” and urged the government to develop a scheme to ensure fair remuneration when creative work is used by AI tools.

While governments across the world should be contemplating AI guardrails, any guidance is unlikely to directly address how you, as a business leader, should wade through the murky waters of AI tools. More likely, the regulation will address the use of consumer data and AI training and protection for artists, as it should.

Moreover, GenAI will be notoriously difficult to regulate effectively. Data and AI laws are country-specific, but AI is cross-border. It’s hard to apply local regulation to data drawn from multiple jurisdictions: whom do you hold accountable in a dispute? The consumer whose data is being used, or the AI company collating the information? These are tricky questions without a smooth solution.

While eventual government regulation (which is most likely due out in 2025) can be a gentle guide, it is up to you as a leader to vet the AI solutions you bring into your business.

Level up company ethics and rigour 

With AI hallucinations abounding, brand trust has never been more important. Customers and clients are relying on every company to have done their due diligence when it comes to how AI is employed within an organisation.

In my own experience in the data and insights industry, rigour around AI tools is extremely important, from the veracity and efficacy of the tool itself to the ethical implications of the tool sourcing data externally or from your own databases. You cannot afford to leave these stones unturned because, ultimately, it is your company’s reputation that is on the line.

So how do you vet these tools? Set up sub-committees or teams within your company to do so, and listen to their recommendations. Look to your industry bodies for guidance and speak to your competitors, as they are likely grappling with exactly the same questions as you. Now is a time to collaborate for the good of the sector overall. We can all return to fierce competition tomorrow.

Understand the consequences of poor data quality

From a much wider-than-expected Republican win to Jaguar’s new ad campaign that missed the mark (and the car), bad data can lead to disastrous results. And this extends to AI.

The impact of AI is only as good as the data it’s built on. Using inaccurate, incomplete or biased data can lead to disastrous decisions, from misguided strategies to harmful customer experiences that lose trust and loyalty.

Business leaders must ensure that data quality is meticulously validated and maintained, as poor training data can lead to financial loss and reputational damage. In a world where AI can drive business outcomes, a single data error can spiral into a major setback.

The rise of GenAI brings immense opportunities but also significant challenges. To succeed, leaders must focus on three key areas: ethical governance, rigorous tool and data vetting, and collaboration with industry peers. While government regulation will provide a framework, individual leaders must take responsibility for building trust in their organisations through transparency and accountability. Mistakes can have lasting consequences; responsible and proactive conduct will position businesses as leaders in this revolution.
