Straits Interactive warns firms: AI tools may expose data

By Staff Writer
Following controversy involving design platform Figma, Straits Interactive has issued a warning to corporate users: confidential business data may already be training external artificial intelligence (AI) models through free, freemium, or embedded AI features in workplace tools.
The alert comes after Figma faced a class-action lawsuit in the United States over allegations that users’ design files—including corporate prototypes, product roadmaps, and intellectual property—were used to train AI features without clear user consent.
While Figma disputes parts of the claim, the case has triggered industry-wide concern. At issue is whether corporate users understand that data entered into AI-enhanced tools—especially free or trial-based ones—may be reused for training purposes. Many privacy policies include vague clauses such as:
• “Your data may be used to improve our services”
• “We may use user content to train our models”
• “We collect and process usage data for product enhancement”
For companies managing sensitive data like financial reports, client records, or proprietary designs, such language poses serious legal and operational risks.
“Many corporate users assume their data remains private simply because they’re using a trustworthy brand or because the AI feature appears as a ‘free add-on’. That assumption is dangerous,” said Kevin Shepherdson, CEO of Straits Interactive and co-author of The AI Factory: An AI Capability Guide for SMEs. “The reality is this: If your employees are using free AI tools or apps that have new untested AI features, your confidential data may already be training someone else’s model.”
He added, “This is not a technical glitch — it is a governance oversight. Organisations must stop treating AI tools like ordinary software. Every AI interaction is a data-sharing event with potential long-term consequences.”
Straits Interactive outlined five key governance risks that companies often overlook when staff use unsanctioned or poorly understood AI tools:
- Invisible data flows: AI features in common tools may log inputs for training.
- Misleading “free” apps: Freemium models often monetize user data.
- Unapproved data use: Employees may paste confidential information into AI tools.
- Ambiguous privacy terms: Broad terms may hide consent for model training.
- Bypassing IT/legal oversight: AI tools can evade traditional procurement safeguards.
“These failures expose organisations to breaches, loss of trade secrets, regulatory penalties, or reputational harm,” the firm warned.
To mitigate the risks, Straits Interactive urged AI tool developers to improve transparency and user control. Key recommendations include avoiding dark patterns, using opt-in data training mechanisms, and offering enterprise-level data isolation. But the group cautioned that corporate users themselves must take initiative.
Companies should immediately adopt clear AI usage policies, prohibit unauthorized tools, assess each AI tool’s data-handling practices, and disable data-sharing settings when available. Staff education is also essential to ensure compliance and awareness of risks.
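One way organisations put such a policy into practice is a pre-send filter that screens prompts for sensitive content before they reach any external AI service. The sketch below is purely illustrative (the pattern names and the `redact_prompt` function are hypothetical, and a real deployment would use a proper data-loss-prevention tool with far broader pattern coverage), but it shows the basic idea: detect and redact sensitive strings before data leaves the corporate network.

```python
import re

# Illustrative sensitive-data patterns: email addresses and long digit runs
# (e.g. account numbers). A production filter would cover far more categories.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,}\b"),
}


def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Scan a prompt before it is sent to an external AI tool.

    Returns the redacted prompt and a list of the pattern labels that
    were found, which can be logged for governance review.
    """
    findings = []
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[{label} REDACTED]", redacted)
    return redacted, findings


if __name__ == "__main__":
    text = "Summarise the Q3 report for jane.tan@example.com, account 12345678."
    safe, found = redact_prompt(text)
    print(safe)
    print(found)
```

A filter like this can sit in a corporate proxy or browser extension, so that even staff using unsanctioned tools never transmit raw client records or financial identifiers.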
Recommended frameworks include ISO 42001, ISO 5338, and Singapore’s Model AI Governance Framework.
Straits Interactive continues to offer regionally recognized training in Responsible AI Governance, Generative AI App Design and Prompt Engineering, and the IAPP AI Governance Professional Certification.
“Companies must recognise that every prompt, every upload, and every autocomplete suggestion is a data-sharing event with long-term consequences,” said Harish Pillay, Adviser on Gen AI and AI Governance at Straits Interactive. “AI is no longer just a technical issue; it is a leadership imperative. The Figma incident highlights the critical need for clear policies, ongoing staff education, and enterprise controls to prevent inadvertent leakage of IP and sensitive data. As AI agents, plugins, and workflows become further embedded across workplace tools, these risks will only increase.”