Next in AI: Will 2025 see innovation and pragmatism coexist?

As businesses gear up for a different outlook on AI in 2025, what are some new trends they anticipate? Let’s take a quick look at what tech leaders foretell for 2025.
Saachi Gupta Ghosh
  • Published On Jan 9, 2025 at 05:00 AM IST
The artificial intelligence (AI) landscape is evolving at breakneck speed. As we enter 2025, companies across industries are shifting from experimentation to tangible, value-driven AI initiatives. With AI already playing a significant role in cybersecurity, business operations, and even the development of new applications, experts predict that the next phase of AI will be marked by greater adoption, pragmatic strategies, and a focus on measurable results.

All eyes are on what’s to come this year: the rise of intelligent agents, the shift from hype to measurable value, the importance of custom AI models, and the role of governance. As businesses gear up for a different outlook on AI in 2025, what are some new trends they anticipate?

Let’s take a quick look at what tech leaders foretell for 2025.

Data security will be at the heart of GenAI adoption

2024 saw organisations across the APAC region embedding AI into their business processes, particularly in cybersecurity. As we look towards 2025, one critical element stands out in the discourse around the adoption and evolution of generative AI (GenAI): data security.

In a conversation, Arvind Nithrakashyap, Co-Founder and Chief Technology Officer of Rubrik, rightly said, “As generative AI models require vast amounts of data to learn and generate content, ensuring this data's privacy, confidentiality, and integrity becomes paramount. Companies that can offer robust data security measures will gain a competitive edge, fostering greater trust among users and partners.”

As businesses and consumers alike demand more from AI in terms of capability and security, generative AI's future looks increasingly intertwined with advancements in data protection. “By 2025, we predict that data security will not only be a benchmark for success in the AI industry but a deciding factor for trust and broad-scale AI adoption by industry and consumers,” Arvind noted.

On the security front, a recent PwC report highlighted a worrying statistic: more than 40% of leaders say they do not understand the cyber risks posed by emerging technologies like GenAI. This leaves many organisations vulnerable to evolving threats. So while AI will become central to cybersecurity strategy in 2025, organisations will also, crucially, seek to secure their own AI models.

The hype around GenAI will wane, and businesses will take a pragmatic approach to AI

With the rise of ChatGPT and other GenAI tools, the hype around AI has been especially palpable over the past two years. We spoke to Remus Lim, Senior Vice President for Asia Pacific and Japan at Cloudera, who noted that in 2025, businesses will start to shift from idealistic expectations to pragmatic applications of AI.

Remus shared, “2025 will see two camps emerging – the first includes businesses that have found successful use cases for GenAI and are reaping the fruits. According to McKinsey, 65% of organisations report regular use of GenAI, with meaningful cost reductions in HR and revenue increases in supply chain management.”

"The hype around GenAI will wane, and businesses will take a more pragmatic approach to AI," Remus said. However, not all industries will benefit from GenAI’s capabilities equally. He added, “The real value of GenAI lies in gaining knowledge and insights at scale – without good data, AI models will not be able to run successfully. Thus, businesses that are most likely to benefit are from sectors with large pools of trusted data that they can tap into for actionable insights.”

The second camp comprises companies that do not traditionally have large-scale databases to draw on and so will not benefit as much from GenAI; they will instead turn to traditional AI or deterministic ML models to drive efficiency and productivity.

Ultimately, Remus foresees that businesses will cease buying into the hype and shine of GenAI, and instead focus on mapping their technology investment roadmap to their broader organisational goals.

In line with the same thoughts, Rita Kozlov, VP, Product Management at Cloudflare, also emphasised that 2025 will be the year of AI pragmatism. “After a period of experimentation, organisations will now be more value-conscious with their AI spend. Organisations will be more scientific and methodical in how they approach putting AI in front of customers, evaluating different approaches and options for different use cases. We’re seeing teams pivot to predictable pricing models, transparent gateway metrics, and smaller models that do the job, rather than the largest, most expensive LLMs.”

Businesses will favour private and customised LLMs over public LLMs

With enterprise AI innovation taking centre stage, businesses will eschew public large language models (LLMs) in favour of enterprise-grade or private LLMs that can deliver accurate insights informed by organisational context. According to a McKinsey study, 47% of companies are currently customising models significantly or developing their own.

“More companies will run customised AI models on-premise,” said Emilio Salvador, VP of Strategy and Developer Relations at GitLab, in another conversation. In 2025, we will see a shift toward on-premise AI deployments. As open-source models become more cost-effective and accessible, organisations will increasingly opt to run customised versions within their own data centres. As a result, it will be cheaper, faster, and easier to own AI models and fine-tune them to individual needs.

Emilio further added, “Companies will find they can combine their data with existing models and tailor the experience for their customers at a fraction of today’s costs. Meanwhile, increased compliance risks associated with AI will drive regulated industries, like financial institutions and government agencies, to deploy models in air-gapped environments for greater control over data privacy and security and reduced latency.”
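
To make that concrete, here is a minimal sketch of on-premise customisation under assumptions the article does not spell out: an open-source base model pulled into the organisation's own data centre, the Hugging Face transformers, peft, and datasets libraries, and illustrative file names and hyperparameters. It fine-tunes a small LoRA adapter on internal documents so no data leaves the environment.

```python
# Minimal sketch: fine-tuning an open-source model on internal data, entirely on-premise.
# Model name, file path, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

BASE_MODEL = "mistralai/Mistral-7B-v0.1"   # any permissively licensed open-source base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains a small set of adapter weights rather than the whole model,
# which is what keeps customisation cheap enough to run on in-house hardware.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Internal documents that never leave the organisation's environment (hypothetical file).
data = load_dataset("json", data_files="internal_docs.jsonl", split="train")
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="onprem-model", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("onprem-model/adapter")   # only a small adapter file, kept next to the base model
```

The same pattern works in an air-gapped environment, since the base model, the training data, and the resulting adapter all stay inside infrastructure the organisation controls.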

Intelligent and adaptive AI agents will surpass the limitations of traditional software

As per Emilio, the future of applications is intelligent, adaptive AI agents that surpass the limitations of traditional software. He predicted, “Rather than interacting with fixed interfaces and preset workflows, users will engage with AI agents that respond intuitively and learn over time. These AI agents will serve as the application itself, providing a more interactive and conversational experience. They will perform tasks, offer guidance, and learn from interactions in real-time. This change will lead to significantly more personalised and responsive applications, fundamentally reshaping how we use software.”
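
A toy sketch of that shift, under assumptions not in the article: the “application” below is just an agent loop that interprets a request, picks a task, and records the interaction so it can adapt later. The call_llm stub and the task list are placeholders for whatever model and business logic an organisation actually uses.

```python
# Toy sketch: the "application" is an agent loop rather than a fixed interface.
# call_llm() and the task list are placeholders, not a real product's API.
from datetime import datetime

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call (local or hosted); returns a canned plan here.
    return "book_leave|2025-02-03 to 2025-02-05"

TASKS = {
    "book_leave": lambda args: f"Leave booked for {args}",
    "expense_report": lambda args: f"Expense report filed: {args}",
}

memory = []   # grows with every interaction, so the agent can adapt over time

def agent(user_message: str) -> str:
    # Ask the model to choose a task and extract its arguments from free text.
    plan = call_llm(
        f"Conversation so far: {memory}\n"
        f"User: {user_message}\n"
        f"Reply as '<task>|<args>' using one of {list(TASKS)} or 'chat|<answer>'."
    )
    task, _, args = plan.partition("|")
    result = TASKS[task](args) if task in TASKS else args   # fall back to plain conversation
    memory.append({"at": datetime.now().isoformat(), "user": user_message, "agent": result})
    return result

print(agent("I'd like to take a few days off in early February"))
```

The design point is that nothing above is a fixed workflow: adding a capability means adding a task, not redesigning screens.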

AI usage must broaden, and building AI trust and governance will be paramount

Steven Seah, Vice President for ASEAN, India & Korea at Informatica, also believes that as vertical AI models scale, AI usage must broaden. These models utilise enterprise data and are tailored to specific industries, thereby ensuring a quicker return on investment. He said, “While there are multiple use cases among major sectors, current exploration is often limited to reducing operational costs and improving productivity. Broadening the usage of AI to optimising supply chain management, combating fraud and risks, and improving CX through hyper-personalisation will drive significant value for organisations looking to innovate and drive competitive advantage.”

“The need for AI oversight and accountability is important as organisations place emphasis on data democratisation, data stewardship and AI data pipelines as topmost important use cases. Organisations will need a strong data governance framework to manage their AI systems and ensure AI is transparent and used responsibly, while complying with AI regulations and policies,” Steven mulled.

As AI upskilling becomes imperative, AI-powered copilots will bridge the existing security talent gap

As per the World Economic Forum’s recent data, the global talent shortage is projected to reach 85 million workers by 2030, and organisations will face a critical shortage of cybersecurity professionals. The issue is especially acute in Asia Pacific, which, according to Boston Consulting Group’s findings, accounts for over half of the global cybersecurity talent gap. On facing these challenges head-on, Gareth Cox, Vice President of Asia Pacific & Japan at Exabeam, pointed to the introduction of AI copilots as a huge opportunity for organisations to bridge the talent gap in two ways.

Gareth explained, “Firstly, AI-powered copilots can automate routine work, freeing security professionals from manual, time-consuming tasks to deliver strategic impact. Secondly, AI-powered copilots can empower analysts with accessible insights to handle more complex tasks. Not only does this minimise training time needed for new analysts, it can also make security roles that previously required significant on-the-job experience and training more accessible to the talent pool.”

Developing a standard AI operating environment will be crucial

There are multiple options available today to run GenAI and other types of AI workloads. Over time, we can expect enterprises to create a standard operating environment for AI use cases. According to Vishal Ghariwala, Senior Director and Chief Technology Officer at SUSE, “A standard AI operating environment ensures consistent governance, improved security, and reduced cost across a hybrid and multi-cloud IT estate.”

Vishal also emphasised the need to optimise resource usage, thereby contributing to reduced CO2 emissions. “Such an operating environment will typically comprise a flexible and open AI platform that is highly scalable and provides common modules and services required by AI workloads such as a curated set of LLMs, data privacy and security, observability, and so on,” he concluded.
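
One way to picture such an environment is as a thin gateway that every AI workload passes through, whichever cloud or data centre it runs in. The Python sketch below is an assumption-laden illustration rather than a description of SUSE’s platform: the curated model list, redaction rule, and metrics counter stand in for the curated LLMs, data-privacy controls, and observability modules Vishal describes.

```python
# Illustrative sketch of a standard AI operating environment enforced at a single gateway:
# a curated model allow-list, a basic data-privacy control, and observability counters.
# Names and rules are assumptions for illustration, not any vendor's actual platform.
import re
import time
from collections import Counter

CURATED_MODELS = {"llama-3-8b", "mistral-7b"}   # the organisation's approved set of LLMs
metrics = Counter()                              # exported to whatever observability stack is in use

def redact_pii(text: str) -> str:
    """Minimal data-privacy control: mask email addresses before they reach any model."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", text)

def ai_gateway(model: str, prompt: str, backend) -> str:
    """Every AI workload, on any cloud or cluster, reaches its model through this one path."""
    if model not in CURATED_MODELS:
        metrics["rejected_requests"] += 1
        raise ValueError(f"{model} is not in the curated model set")
    start = time.perf_counter()
    response = backend(model, redact_pii(prompt))   # backend = the actual serving layer
    metrics["requests"] += 1
    metrics["latency_ms_total"] += int((time.perf_counter() - start) * 1000)
    return response

# Example with a stub backend standing in for the real serving layer.
print(ai_gateway("mistral-7b", "Summarise the ticket from jane@example.com",
                 backend=lambda m, p: f"[{m}] processed: {p}"))
print(dict(metrics))
```

Because every request funnels through one path, governance, security, and cost controls stay consistent across a hybrid and multi-cloud estate.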

Multi-agent architectures will become the de facto reality

As intelligence becomes more sophisticated, both by default and by design, AI agents will start working and competing with one another to undertake complex workflows. And unlike people, they are not going to get sick or tired. We spoke to Nina Schick, AI Council Member at Qlik, who said that AI’s impact in 2025 will entail three major themes: a new crisis in data authenticity, the applied value of AI, and autonomous agents with business intelligence.

She stated, “For businesses, that’s tremendously exciting. It won’t happen next year, but by 2030, multi-agent architectures won’t be revolutionary; they’ll be ordinary. Businesses, from Fortune 500 giants to two-person startups, will harness this intelligence at their fingertips.”

Multi-agent architectures are arriving – just as there are competing cloud environments and AI foundation models, Nina expects to see multiple agentic architectures co-existing. Interoperability and avoiding vendor lock-in will be critical to realising the full potential of agentic reach and value.

“Some agents will be good at data integration, others at schema cleaning, text-to-SQL generation, automation, or building dashboards. Over time, these agents will learn to interact with one another. But humans must stay in the loop, or at least ‘over the loop’, for surveillance and governance,” Nina explained.
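
Here is a minimal sketch of the pattern Nina describes, with specialist agents handing work to one another and a person kept over the loop before anything risky executes. The agent roles mirror her examples (text-to-SQL, dashboards), but the orchestration code and approval step are illustrative assumptions, not any particular vendor’s architecture.

```python
# Minimal multi-agent sketch: specialist agents chained into one workflow,
# with a human approval gate before generated SQL is executed.
# Agent internals are stubs; a real system would back each with a model or tool.

def text_to_sql_agent(question: str) -> str:
    """Specialist agent: turns a business question into SQL (stubbed here)."""
    return "SELECT region, SUM(revenue) FROM sales GROUP BY region;"

def dashboard_agent(rows: list) -> str:
    """Specialist agent: turns query results into a simple dashboard summary."""
    return "\n".join(f"{region}: {revenue:,}" for region, revenue in rows)

def human_over_the_loop(sql: str) -> bool:
    """Governance step: a person reviews what the agents intend to run."""
    return input(f"Approve this query?\n{sql}\n[y/N] ").strip().lower() == "y"

def run_workflow(question: str, execute_sql) -> str:
    sql = text_to_sql_agent(question)
    if not human_over_the_loop(sql):
        return "Workflow halted by reviewer."
    return dashboard_agent(execute_sql(sql))

# Example with a stub executor standing in for a real warehouse connection.
fake_warehouse = lambda sql: [("APAC", 1_200_000), ("EMEA", 950_000)]
print(run_workflow("Which regions drove revenue last quarter?", fake_warehouse))
```

Swapping in a different text-to-SQL or visualisation agent does not change the workflow, which is the interoperability point: agents from different vendors can co-exist as long as they agree on the hand-off.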