AI Agents are coming for your workflow (in a good way)

A quarter of companies using Gen AI will launch agentic AI pilots this year.

Today, every company wants to be an AI company, yet only 1% consider themselves fully mature in AI adoption, according to McKinsey[1]. As we move from chatbots to autonomous AI agents, companies that haven’t already implemented AI risk losing significant ground to competitors.

This could happen faster than they think. Autonomous AI agents, or “agentic systems”, go beyond pre-defined scripts to handle nuanced interactions.

AI agents: a ‘true revolution’

These agents can not only generate content but also make decisions and take action with limited or no human supervision. The move to intelligent, scalable digital labor represents a true revolution.

By 2028, Gartner[2] forecasts that 33% of enterprise software applications will include agentic AI, enabling 15% of day-to-day work decisions to be made autonomously.

This shift has significant implications for businesses: the potential for a digital labor force to work alongside humans, reducing costs and driving innovation and scalability.

For the first time, workforces can be supplemented by autonomous AI agents working around the clock, boosting productivity, efficiency, and competitive advantage.

The adoption of AI agents

Deloitte[3] predicts that 25% of companies using generative AI will launch agentic AI pilots this year.

Across every industry, AI agents are making a significant impact. In customer service, they offer 24/7 support, handling a broad range of issues.

Inventory management

For inventory management, they automate tasks, optimise stock levels, and provide real-time insights.


Recruitment and HR

In recruitment, they streamline the hiring process by screening resumes, scheduling interviews, and conducting initial assessments, reducing the workload on human recruiters.

By taking over repetitive tasks, AI agents allow workers to focus on high-value contributions, driving creativity, strategy, and meaningful impact.

Education

Beyond business, this technology is improving students’ academic performance by providing personalised tutoring.

Healthcare

In healthcare, AI agents reduce administrative burdens, allowing professionals to focus on complex cases and monitor patient progress, leading to better health outcomes.

Disruptions and risks

The shift to agentic AI systems brings disruptions and risks, not least around trust and data accuracy. Trusting the technology is key to integrating agents.

According to Salesforce research[4], 93% of global desk workers don’t consider AI outputs completely trustworthy for work-related tasks. Sixty percent of consumers say advances in AI make trust even more important.

To build trust, it’s crucial to ensure that AI systems use accurate and relevant data, maintain privacy, and operate within ethical and legal boundaries. This means implementing robust data governance and oversight.

AI agents must also be transparent and explainable, so users know when they are interacting with an AI and how it operates.


Accountability

Clear accountability is essential to define responsibility for an agent’s performance and the trustworthiness of its outputs. But increasing productivity and building trust is not as simple as deploying AI agents right away, according to a new Salesforce white paper[5].

The white paper outlines key considerations for designing and using AI agents, and how global policymakers can help unlock AI’s full potential.

To achieve a smooth and beneficial integration, businesses, governments, non-profits, and academia must collaborate to create comprehensive guidelines and guardrails.

Continuous training programs are also key. They help AI systems stay up to date and work effectively alongside humans, enhancing productivity and allowing employees to focus on more strategic tasks.

AI agents require oversight

Without proper oversight, autonomous AI can make decisions that conflict with human values or ethics, leading to loss of trust, legal issues, and damaged reputations.

To avoid these risks, a multistakeholder approach is essential. It’s no longer a question of whether AI agents should be integrated into workforces, but how best to optimise human and digital labor working together to reach desired goals.

Although AI agents are the latest technology breakthrough, the fundamental principles of sound AI public policy that protects people and fosters innovation remain unchanged:

  • risk-based approaches,
  • clear delineation of the different roles in the ecosystem, and
  • robust privacy, transparency, and safety guardrails.

By addressing these concerns, we can envision a future with new levels of productivity and prosperity, driven by a digital workforce that continuously learns and improves.



References:

[1] Superagency in the workplace: Empowering people to unlock AI’s full potential. By Hannah Mayer, Lareina Yee, Michael Chui, and Roger Roberts. McKinsey & Company. 28 January 2025.
[2] Intelligent Agents in AI Really Can Work Alone. Here’s How. By Tom Coshow. Gartner. 1 October 2024.
[3] Deloitte Global’s 2025 Predictions Report: Generative AI: Paving the way for a transformative future in Technology, Media, and Telecommunications. By Vicktery Zimmerman. Deloitte. 19 November 2024.
[4] Despite AI enthusiasm, Workforce Index reveals workers aren’t yet unlocking its benefits. Slack. 16 October 2024.
[5] The Next Frontier in Enterprise AI: Shaping Public Policies for Trusted AI Agents. Salesforce.
