Grok bans expose deep AI governance gaps in SE Asia

By Francis Allan L. Angelo
It has been a messy few weeks for artificial intelligence (AI) regulation in our neighborhood.
Indonesia, Malaysia, and the Philippines recently scrambled to slam the brakes on Grok, the generative AI tool linked to Elon Musk’s xAI.
Regulators hit the panic button over deepfakes, harassment, and the spread of sexually explicit manipulated images.
In Malaysia, authorities blocked access entirely, citing noncompliance with safety notices.
Here in the Philippines, the Department of Information and Communications Technology (DICT) imposed a ban on January 16, 2026.
Just five days later, on January 21, they lifted it after xAI reportedly implemented tighter safeguards.
But let’s be honest about the enforcement here. It’s leaky. Tech-savvy users can—and do—sidestep these blocks with virtual private networks (VPNs) in seconds.
It raises a tough question about whether national bans actually work when the model is embedded in a global social platform like X.
This isn’t just an ASEAN headache, either.
The European Union is currently investigating X under its Digital Services Act, specifically looking at Grok’s role in spreading illegal content involving children.
According to Takanori Nishiyama, SVP Asia Pacific (APAC) & Japan Country Manager at Keeper Security, we are missing the forest for the trees.
The problem isn’t the AI itself; it’s that our rules are eating innovation’s dust.
“The recent scrutiny and restriction of Grok AI across parts of Southeast Asia isn’t a rejection of artificial intelligence itself, but an indicator that governance has been unable to keep pace with adoption. As regulators in Indonesia, Malaysia, and the Philippines assess the societal and security implications of generative AI, organizations across APAC should view this moment as both a warning and an opportunity,” Nishiyama says.
We have to stop treating these systems like passive calculators because they aren’t.
“AI systems are not passive tools. They act autonomously, process sensitive data, and increasingly interact with critical operational systems. Without clear governance, AI becomes a new class of digital identity operating at machine speed, often outside traditional security controls. This is particularly acute in APAC, where regulatory approaches vary widely. Singapore’s AI Verify framework provides one example, while Japan favours an innovation-first, soft-law model. The divergent approaches to regulating AI create uneven risk exposure across borders,” Nishiyama said.
From a cybersecurity view, the risk is what happens after the AI is deployed. Unregulated “shadow AI” creates massive audit gaps that serious enterprises simply cannot afford.
The numbers paint a stark picture of where this is heading. Gartner has warned that by 2027, 50 percent of business decisions will be augmented or automated by AI, underscoring the need for clear accountability and traceability today.
“From a cybersecurity perspective, the issue is not the model itself, but how access, identity and decision-making are governed once AI is deployed. Unregulated or ‘shadow AI’ tools can introduce unmanaged credentials, expose sensitive datasets, and create audit gaps that are unacceptable for enterprises and public sector bodies,” Nishiyama said.
Real people are caught in the crossfire, too.
“There are tangible ramifications for end-users, too. Poorly governed AI can leak personal data, generate misleading outputs, or be manipulated to perform unauthorized actions. Maintaining trust will be critical as adoption continues to grow.”
So, if bans are easily bypassed, what is the actual fix?
Nishiyama argues for control, not cancellation: “The path forward is not blanket bans, but enforceable guardrails. That means adopting identity-first security, least-privilege access and full auditability while maintaining human oversight for high-risk actions. APAC organizations that embed governance into AI deployments now will be best positioned to innovate responsibly, comply with evolving regulation, and maintain public trust.”
Singapore is trying to lead the way with its AI Verify testing framework by pushing for “responsible AI” that is practical rather than just a nice slogan.
But as the Grok episode shows, governments can move fast to block, but accountability is much harder to pin down when the risk travels across borders and through reposts.