AI Transparency Statement
Last Updated: January 2026
At Probative Edge Ltd, we are committed to the responsible and transparent use of Artificial Intelligence (AI). In alignment with global best practices and the European Union AI Act, we are providing this statement to ensure you understand how we use AI technology in our services.
1. You are Interacting with an AI System
Please be aware that Vyoma Ops is powered by an Artificial Intelligence system. It is not a human. While it is designed to be helpful and conversational, it processes information based on patterns and data rather than human understanding or empathy.
2. Intended Purpose & Capabilities
We use this AI system to:
Enrich Task Data (Gap Analysis): Continuously analyse incoming facilities management tickets to identify missing information ("gaps") and automatically surface relevant context from your existing asset registers and CAFM records.
Orchestrate Workflows: Recognise Task IDs across different communication channels (e.g., Email, Teams) to link scattered conversations into a single, action-ready shared workspace.
Monitor Risk & Compliance: Analyse decisions and actions in real time to flag potential scope creep, commercial risk, or cost leakage before they escalate.
3. Limitations & Accuracy
AI systems can make mistakes. Please note the following limitations:
Possibility of Errors: Although each module applies specific guardrails, the system may occasionally produce incorrect, inaccurate, or biased information ("hallucinations"), or misinterpret complex nuance.
No Professional Advice: Information provided by this AI should not be considered engineering, legal, medical, or financial advice. Always consult a qualified professional for sensitive matters.
Human Decision Required: The system creates an "auditable decision trail" and suggests next steps, but it does not execute physical maintenance or authorise final commercial payments autonomously.
Verification: We recommend that you verify any factual claims made by the AI before relying on them.
Data Dependency: The accuracy of the AI's insights depends on the quality of the historical data in your underlying systems (CAFM/CMMS).
4. Human Oversight
While the AI operates autonomously, our team regularly reviews a sample of interactions for quality assurance, and a human agent is available to review or escalate any AI recommendation.
5. Data Privacy
Your interactions with this AI are stored for training and quality improvement purposes. We process your data in accordance with our Privacy Policy and GDPR requirements. We do not use your inputs to train third-party models.
6. Contact Us
If you have concerns about an AI interaction or believe the system has made a serious error, please contact us at Fraser@probativeedge.co.uk.