IBM Experts Reveal How AI’s Next Leap Is Changing Cybersecurity and the Future of Work

IBM experts explain AI’s shift from content creation to autonomous tasks and stress the ongoing challenges in cybersecurity and managing AI errors. AI will reshape jobs by automating routine tasks, while humans focus on creativity and complex skills.

Published on: Sep 08, 2025

IBM Experts Break Down AI’s Progress and Cybersecurity Challenges


Artificial intelligence continues to evolve, opening new possibilities while raising complex questions. IBM's Martin Keen and Jeff Crume recently shared clear insights on where AI development stands today and why cybersecurity remains so hard to get right.

From Generative AI to Agentic AI: What’s the Difference?

Martin Keen, an IBM Master Inventor, explained the leap from generative AI to agentic AI. Generative AI produces content—like text, images, or code—based on prompts. Agentic AI, however, takes autonomy a step further. It plans and completes multi-step tasks on its own, adapting as needed until a goal is reached.

"It can trigger its own next steps, adapt to changing contexts, and keep going until it finally meets that goal," Keen said. This means AI is moving beyond just creating content to actively executing tasks like autonomous incident response or complex robotic automation. This shift demands more trust and smarter oversight.

Clearing Up Misconceptions About the Dark Web

Jeff Crume, an IBM Distinguished Engineer, addressed myths around the “Dark Web.” He clarified that “dark” doesn’t mean illegal content—it refers to parts of the internet that are hidden and unindexed. Blocking the Dark Web isn’t practical because:

  • It makes up less than 2% of the internet.
  • Jurisdictional issues make enforcement difficult across borders.
  • It supports legitimate uses, such as free speech in oppressive regimes and research that monitors hacker activity.

Crume described efforts to block it as “a bit of a game of whack-a-mole.”

Understanding AI Hallucinations

Another issue Keen discussed is AI “hallucinations.” These happen when an AI confidently states false information without any intent to deceive. Large language models (LLMs) are, at their core, prediction machines: they are built to guess the most likely next word or token, not to verify facts.
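
As a toy illustration, with made-up probabilities rather than real model output, the snippet below shows why a pure next-token predictor can produce a confident wrong answer: it simply returns whichever continuation scores highest, and nothing in that process checks whether the result is true.

```python
# Toy next-token prediction. The probability table is invented for
# illustration; it is not the output of any actual model.
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {"Sydney": 0.55, "Canberra": 0.40, "Melbourne": 0.05},
}


def predict_next(prefix: str) -> str:
    # Greedy decoding: return the highest-scoring continuation. Nothing here
    # verifies facts, so a familiar but wrong answer can win.
    candidates = NEXT_TOKEN_PROBS[prefix]
    return max(candidates, key=candidates.get)


print(predict_next("The capital of Australia is"))  # prints "Sydney", a confident error
```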

They often fill gaps with plausible but incorrect details, especially on recent or niche topics. Techniques like Retrieval-Augmented Generation (RAG), which grounds answers in data pulled from external sources, help reduce these errors. Still, Keen emphasized the need for “human in the loop” validation to check AI outputs for accuracy.
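
For readers who want a feel for how that fits together, here is a deliberately simplified sketch of a RAG-style pipeline with a human review step at the end. The keyword-overlap retriever, the stubbed-out model call, and the approval prompt are stand-ins invented for illustration, not any particular product's API; a real system would use embeddings, a vector index, and an actual LLM behind the same flow.

```python
# Simplified RAG pipeline with human-in-the-loop review. All functions are
# illustrative stand-ins, not a real retrieval or model API.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the question.
    # Real systems use embeddings and a vector index instead.
    q_words = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:top_k]


def generate(question: str, context: list[str]) -> str:
    # Stand-in for an LLM call: retrieved passages go into the prompt so the
    # model grounds its answer in them instead of guessing.
    prompt = "Answer using only this context:\n" + "\n".join(context) + "\n\nQ: " + question
    return f"[model answer grounded in {len(context)} passage(s), prompt of {len(prompt)} characters]"


def human_review(answer: str) -> bool:
    # Human-in-the-loop validation: a person checks the answer against the
    # sources before it is published or acted on.
    return input(f"Approve this answer? {answer} [y/n] ").strip().lower() == "y"


if __name__ == "__main__":
    docs = [
        "RAG pulls supporting data from external, up-to-date sources.",
        "LLMs predict the next token and can hallucinate on niche topics.",
    ]
    question = "Why does RAG reduce hallucinations?"
    answer = generate(question, retrieve(question, docs))
    print("Published" if human_review(answer) else "Sent back for revision")
```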

AI’s Impact on Jobs: Transformation, Not Replacement

Looking forward, the experts suggested AI will mostly change jobs rather than eliminate them outright. Similar to how ATMs changed banking work without removing jobs, AI will automate routine, rule-based, or low-judgment tasks.

Jobs requiring creativity, empathy, complex reasoning, and physical skills will stay human-centered. This means ongoing learning and upskilling are key to using AI tools effectively, freeing people to focus on higher-value work that machines can’t easily do.

For professionals interested in practical AI training and staying ahead in this evolving landscape, exploring targeted courses can provide valuable skills and insights. Visit Complete AI Training’s latest courses to find resources tailored for IT and development roles.