
The Verglas Threat of AI


Hazards businesses might miss before it’s too late.

TL;DR The meteoric rise of generative AI has often been described as a “gold rush,” but for many modern enterprises, AI is a “black ice” risk – transparent, hard to detect, and capable of sending a company into a reputational skid before it realises it has lost traction. While tools like Salesforce’s Einstein Trust Layer offer “traction control” by masking data and enforcing zero-retention policies, technical safeguards are not a cure-all.

The danger lies in Shadow AI, where employees bypass secure systems to use public, unsanctioned LLMs, inadvertently feeding proprietary data into public training sets. Combined with the staggering environmental costs (huge water and energy consumption) and failure rates of up to 95% for unguided AI pilots, the financial stakes are massive. To stay on the road, businesses must pair secure software with rigorous employee training and a commitment to Responsible AI.

***

In the physical world, verglas, also widely referred to as black ice, is a thin, transparent coating of glazed ice on a surface. It is notoriously difficult to see, often appearing as nothing more than a harmless wet patch until a vehicle’s tyres lose grip. By the time the driver realises the danger, the car is already spinning out of control. For businesses in 2026, AI presents a comparable hazard. It promises a smooth, fast journey toward productivity, but beneath that polished surface lie invisible patches of “Shadow AI,” data leakage, and environmental costs that can send an entire organisation into a financial and reputational skid.

The Invisible Threat: Shadow AI

The most dangerous patch of black ice is Shadow AI – the unsanctioned use of AI tools by employees without the knowledge or oversight of the IT department.

While leadership may be debating official AI strategies in the boardroom, the workforce has already moved ahead. According to a 2025 KPMG global survey, up to 58% of employees use AI productivity tools daily, yet only 41% of organisations have a formal policy guiding that use (Espria, 2025) – a concerning and undeniable governance gap.

When an analyst uploads a sensitive quarterly earnings report to a free, public Large Language Model (LLM) to “summarise the key trends,” they aren’t just saving time – they are potentially feeding proprietary corporate data into a public training pool. Without a protected license or “Enterprise” tier agreement, many LLM providers reserve the right to use prompt data to train future iterations of their models. Once that data is ingested, it is effectively public; it can reappear in the outputs of a competitor’s query months later. In this scenario, the business has lost its “grip” on its most valuable asset – its data.

Good News for Salesforce Users

For organisations already operating within the Salesforce ecosystem, there is a significant safety measure: the Einstein Trust Layer. The good news for Salesforce users is that this built-in governance framework acts as a sophisticated buffer between corporate data and generative models. By utilising dynamic data masking, the Trust Layer strips away personally identifiable information (PII) and sensitive records before they ever reach an external LLM, replacing them with anonymised placeholders.

Furthermore, Salesforce’s “zero-retention” policy ensures that no data shared through the platform is stored or used by third-party providers to train their public models. However, this technical safety net only extends as far as the Salesforce platform. The persistent danger remains that employees, if not rigorously trained on AI best practices, may still copy-paste that same sensitive data into consumer-grade LLMs or unsanctioned browser extensions outside of the Salesforce environment. Without a culture of “AI Literacy,” the security of the Trust Layer can be easily bypassed by a single well-intentioned but uninformed employee looking for a shortcut.
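The masking idea behind a trust layer can be illustrated with a minimal sketch. To be clear, this is not Salesforce’s implementation: the regex patterns, the `[EMAIL]`/`[PHONE]` placeholder format, and the `mask` function are all illustrative assumptions, showing only the general principle of substituting anonymised tokens for PII before a prompt leaves the organisation.

```python
import re

# Minimal illustration of dynamic data masking: replace PII with
# anonymised placeholders before a prompt is sent to an external LLM.
# The patterns and placeholders are illustrative assumptions, not
# Salesforce's actual masking rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask(prompt: str) -> str:
    """Substitute each detected PII match with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

masked = mask("Contact jane.doe@acme.com or 555-867-5309 about Q3.")
print(masked)  # Contact [EMAIL] or [PHONE] about Q3.
```

A production trust layer goes further (entity recognition rather than regexes, reversible de-masking of the model’s response), but the principle is the same: sensitive values never reach the external model in the clear.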

The Environmental and Financial Toll

The “black ice” metaphor extends beyond data privacy to the hidden costs of AI infrastructure. At high-level tech events, the conversation often centres on “efficiency” and “innovation,” but the physical reality of AI is resource-heavy and carbon-intensive.

A single exchange with an LLM can consume roughly 0.26 mL of water for cooling (Online Learning Consortium, 2025). While this seems negligible, at the scale of billions of monthly queries, the impact is staggering. Microsoft and Google reported year-on-year water consumption increases of 34% and 20% respectively as they expanded their AI-ready data centres (GOV.UK, 2025). AI servers are also projected to triple their energy demand by 2028, potentially consuming enough electricity to power 28 million homes (GOV.UK, 2025).
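To see how the per-query figure compounds, here is a back-of-envelope calculation. The 0.26 mL/query figure is from the source above; the volume of one billion queries per day is a hypothetical assumption for illustration only.

```python
# Back-of-envelope estimate of daily cooling-water use at scale.
# 0.26 mL/query is the cited figure; the query volume is an
# assumed round number for illustration, not a reported statistic.
ML_PER_QUERY = 0.26               # millilitres of cooling water per query
QUERIES_PER_DAY = 1_000_000_000   # assumption: one billion queries/day

litres_per_day = ML_PER_QUERY * QUERIES_PER_DAY / 1000  # mL -> litres
print(f"{litres_per_day:,.0f} litres per day")  # 260,000 litres per day
```

A “negligible” per-query cost becomes hundreds of thousands of litres a day at platform scale, which is why the aggregate reporting figures above matter more than the per-query one.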

For businesses, these aren’t just ethical concerns; they are financial ones. Governments are increasingly mandating transparent environmental reporting. Companies that ignore the “green” cost of their AI implementations today may find themselves hitting a wall of carbon taxes and regulatory fines tomorrow.

Furthermore, the financial “spin-out” from failed AI projects is already a reality. Recent data indicates that 70% to 95% of all AI pilots fail to reach full production (Medium, 2026). These failures aren’t just lost time – they represent billions in “economic vandalism.” In 2025 alone, American companies spent an estimated $644 billion on AI deployments, many of which were abandoned due to poor data quality or escalating “token” costs that were not budgeted for (Medium, 2026; Unosquare, 2026).

How to Gain Traction: Responsible AI

To navigate the black ice, businesses must shift from a “move fast and break things” mindset to one of Responsible AI. This isn’t just a buzzword; it is the winter tyres and traction control of the digital age.

Responsible AI frameworks weigh three drivers: Realisation, Reputation, and Regulation (EY, 2025). In practice, they rest on three pillars:

  1. Data Sovereignty: Ensuring that every AI tool used – official or otherwise – operates under a protected license where data is masked and never used for external model training.
  2. Algorithmic Transparency: Moving away from “black box” systems toward models that offer explainability, ensuring that decisions (like hiring or credit scoring) are not biased or discriminatory.
  3. Governance by Design: Implementing “AI Gateways” that log all prompts and responses, providing the audit trail necessary to defend business decisions in court or during a regulatory audit.
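The third pillar can be sketched in a few lines. This is a minimal, assumed design, not a reference to any specific gateway product: the `call_llm` stub, the in-memory `AUDIT_LOG`, and the log fields are all illustrative choices showing how a gateway records every prompt/response pair as it passes through.

```python
import datetime
import json

# Minimal sketch of an "AI Gateway": every prompt/response pair is
# timestamped and logged before the response is returned, producing
# the audit trail described above. The stub model call and the log
# schema are illustrative assumptions, not a specific product's API.
AUDIT_LOG = []

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call."""
    return f"(model response to: {prompt})"

def gateway(user: str, prompt: str) -> str:
    """Route a prompt through the gateway, logging both directions."""
    response = call_llm(prompt)
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    })
    return response

gateway("analyst-42", "Summarise Q3 revenue trends")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

In a real deployment the log would go to append-only, access-controlled storage rather than a Python list, but the shape of the record (who asked what, and what came back, when) is what a regulator or court would ask for.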

The dangers of AI for businesses are rarely spectacular explosions; they are quiet, frictionless slips. Shadow AI, unprotected data sharing, and ignored environmental footprints are the patches of black ice that catch even the most sophisticated companies off guard. To survive the transition into an AI-driven economy, businesses must look past the shiny exterior of the technology and invest in the governance structures that keep them on the road.

Need help implementing responsible AI and avoiding the pitfalls that might send your data spiralling out of control? Book a call with our team to learn how we get results – the right way.

References

Espria (2025) Shadow AI: Executive Briefing on Real Risks, Business Impact and Mitigation. Available at: https://www.espria.com/resources/shadow-ai-executive-briefing-on-real-risks-business-impact-and-mitigation/ (Accessed: 30 March 2026).

EY (2025) The business case for responsible AI. Available at: https://www.ey.com/en_us/insights/ai/the-business-case-for-responsible-ai (Accessed: 30 March 2026).

GOV.UK (2025) Report: Water use in AI and Data Centres Executive summary. Available at: https://assets.publishing.service.gov.uk/media/688cb407dc6688ed50878367/Water_use_in_data_centre_and_AI_report.pdf (Accessed: 30 March 2026).

Medium (2026) How the AI Industry Created $644 Billion of Economic Vandalism in 2025. Available at: https://skooloflife.medium.com/how-the-ai-industry-created-644-billion-of-economic-vandalism-in-2025-1ca0d71ab6f2 (Accessed: 30 March 2026).

Online Learning Consortium (2025) The Real Environmental Footprint of Generative AI: What 2025 Data Tell Us. Available at: https://onlinelearningconsortium.org/olc-insights/2025/12/the-real-environmental-footprint-of-generative-ai/ (Accessed: 30 March 2026).

Unosquare (2026) AI Implementation Mistakes That Cost Millions | Avoid These Errors. Available at: https://www.unosquare.com/blog/ai-development-mistakes-that-cost-companies-millions-and-how-to-avoid-them/ (Accessed: 30 March 2026).