Dr Bhaskar Dasgupta

Chairman of the Board & Non-Executive Director, Apex UAE, Bahrain, Israel & India Boards, demystifies AI implementation at board level

What is the single most critical practical tool or approach a board director needs to master to effectively leverage AI today, and why?

You don’t need to be a prompt engineer or get into GPT token math, but you do need to master the art of structured AI questioning. The most critical tool today? A working understanding of how to interrogate AI systems intelligently. Not code. Not dashboards. Curiosity, translated into good prompts. Back in 1995, when I was first playing around with rule-based expert systems and neural networks in trading and banking, the bottleneck wasn’t the tech—it was how poorly the humans asked questions. Fast forward to today, and LLMs are astonishing, but they still spit out garbage if you feed them garbage. So as a board director, you need to develop a “Socratic AI Muscle” to challenge outputs, triangulate risk, and imagine what questions your CEO isn’t asking. Because AI doesn’t hallucinate on its own. It’s usually following human delusion faithfully. It is also vital that you use multiple AI tools; no single tool is right every time.


What is the biggest ethical challenge boards currently face with AI adoption, and how can robust governance address it?

The biggest ethical challenge isn’t privacy, or hallucinations, or even bias. It’s delegated accountability. Boards are too often seduced into thinking, “Well, the algorithm decided,” as if abdicating decision-making absolves you of liability. It doesn’t. I’ve seen this play out since the early neural net pilots in the 2000s. When models go wrong—especially in credit, recruitment, or surveillance—you can’t just wave your hands and say, “It was the AI.” The black box needs to be unpackable and auditable. Robust governance means boards must demand two things: line-of-sight explainability (how did the system reach its recommendation?) and ownership maps (who is accountable for monitoring, auditing, and intervening?). Governance doesn’t mean slowing down AI. It means ensuring your AI has a soul—or at least a conscience handler.


What is the primary barrier preventing more boards from confidently adopting and innovating with AI, and what leadership approach can overcome it?

Fear wrapped in jargon. That’s the barrier. Many boards are still trying to recover from their blockchain PTSD, and now AI has arrived with even more acronyms, consultants, and hype cycles. The tech is moving fast, but board confidence is stuck buffering. The leadership approach to fix this? Mandatory AI literacy at board level. And I don’t mean a 30-minute EY slideshow. I mean getting your hands dirty. I’ve chaired sessions where we forced directors to use AI in live simulations—compliance summaries, IRR calculations, even drafting ESG disclosures. When they saw what it could do—and what it couldn’t—they moved from fear to fluency. AI isn’t a black box. It’s a mirror. Boards need to look into it. Look at what the Norwegian Sovereign Wealth Fund did. It enforced the use of AI for ALL staff, including directors. People who were not using AI were told they would not be promoted. It needs that level of sponsorship and push.


Where do you see the most significant missed opportunities for boards in utilizing AI to enhance their oversight, strategy, or decision-making?

Boards are sleeping on AI-enhanced foresight. Too many are using AI as a glorified intern—summarising papers, generating minutes, pulling sentiment analysis. Useful, but hardly strategic. Where the real missed opportunity lies is in using AI to test strategy. I’ve been involved in projects where we used agent-based modelling and AI to simulate geopolitical risk, capital-raising outcomes, even tokenisation scenarios for new funds. Directors could rehearse futures, not just review spreadsheets. I am using AI to help wargame cybersecurity issues for a digital investment manager board that I chair. The oversight piece also needs a revamp. AI can monitor regulatory changes across jurisdictions in real time and flag compliance drift long before the lawyers call. Yet most boards are still waiting for the quarterly binder to be dropped off like it’s 2002.


How do you anticipate the AI regulatory landscape will evolve over the next 12–18 months, and what proactive steps should boards be taking now?

Regulation will bifurcate—fast and furious in some jurisdictions, philosophical and plodding in others. The EU will over-regulate. The US will waffle but will do extraordinary things. China is moving very rapidly in the industrialisation of AI. The UAE, HK, and Singapore will take the opportunity to become the sandbox sovereigns of AI innovation, just like they did with crypto. We are so lucky to be in the UAE.


Three things boards should do…

1. Mandate an AI audit trail—start now. Regulators will want explainability, traceability, and risk logs.
2. Push your management teams to join AI standards initiatives—don’t wait for ISO or IOSCO to knock.
3. Build AI use-case registers—like your risk register, but for AI. What tools are being used, by whom, for what, and with what guardrails.
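To make the third point concrete, here is a minimal sketch of what one entry in an AI use-case register might look like. All field names and the example values are illustrative assumptions, not a prescribed standard; the point is simply that each entry should answer the four questions in the checklist: what tool, who owns it, for what purpose, and with what guardrails.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One row in a board-level AI use-case register (illustrative fields only)."""
    tool: str                 # the AI tool or model in use
    owner: str                # accountable individual, mirroring an ownership map
    purpose: str              # what the tool is being used for
    guardrails: list[str] = field(default_factory=list)  # controls in place

# Hypothetical example entry
register = [
    AIUseCase(
        tool="LLM assistant",
        owner="Head of Compliance",
        purpose="Summarising regulatory change across jurisdictions",
        guardrails=["human review before filing", "audit log retained"],
    ),
]

# Flag any entries operating without guardrails for board attention
unguarded = [u.tool for u in register if not u.guardrails]
```

Kept alongside the risk register and reviewed on the same cadence, even a simple structure like this gives the board the traceability that regulators are likely to ask for.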