AI Policy
Meetric Conversation Intelligence Platform
Definitions
For the purposes of this policy, these terms are defined as follows:
AI-Generated Content: Any output automatically produced by our AI systems, including transcriptions, summaries, analytics, and insights.
Personal Data: Any information relating to an identified or identifiable person, including but not limited to names, contact details, voice recordings, and professional information captured in conversations.
Pseudonymization: The process of replacing identifying information with artificial identifiers ("tokens") that maintain analytical value while protecting individual privacy. Original identifying information is stored separately and securely.
Source Material: Original recordings, transcripts, or text data from which AI insights are derived. This serves as the verifiable basis for all AI-generated analysis.
Human Oversight: Direct human supervision and review of AI system outputs, including the ability to verify, override, or correct AI-generated insights.
Insights: Patterns, trends, or observations automatically identified by our AI systems, always presented with clear links to supporting evidence in source materials.
User: Any person authorized to access and use the platform's AI capabilities, including employees, administrators, and designated third parties where applicable.
1. System Classification and Scope
1.1 System Description
Meetric provides an AI-enhanced conversation intelligence platform that processes audio and video recordings as well as textual data, such as email and chat, from customer and internal business conversations. The AI functionalities we employ include, but are not limited to:
- Automated speech-to-text transcription
- Speaker identification and diarization
- Natural language processing for conversation analysis
- Automated summarization
- Conversation pattern recognition
- AI analysis of process fulfilment in conversations
Our AI capabilities combine proprietary systems that we host and maintain with select third-party Large Language Models (LLMs). The balance between in-house and third-party technologies varies by application, and we continuously evaluate and adapt this mix based on evolving needs and available alternatives.
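For orientation only, the sketch below outlines one way such a processing flow can be organised. The stage names, function signatures, and the split between in-house steps and third-party LLM calls are simplified assumptions made for readability; they do not describe our production architecture.

```python
# Illustrative sketch only: stage names and interfaces are simplified
# assumptions, not a description of Meetric's production systems.
from dataclasses import dataclass


@dataclass
class Transcript:
    text: str            # speech-to-text output
    speakers: list[str]  # speaker labels from diarization


def transcribe(audio: bytes) -> Transcript:
    """Speech-to-text and speaker diarization (hypothetical interface)."""
    raise NotImplementedError


def analyze(transcript: Transcript) -> dict:
    """Summarization, topic detection, and process-adherence analysis.
    Depending on the task, this step may combine proprietary models with
    pseudonymized calls to third-party LLMs (see Section 2.1)."""
    raise NotImplementedError


def process_conversation(audio: bytes) -> dict:
    """End-to-end flow: every insight keeps a link back to its transcript
    so that human reviewers can verify it (see Section 1.2)."""
    transcript = transcribe(audio)
    insights = analyze(transcript)
    return {"transcript": transcript, "insights": insights}
```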
1.2 Risk Classification
Our platform's risk classification varies based on specific use contexts and implementation:
Limited Risk Use Cases
These are our standard applications where the AI system serves as a supportive tool without direct impact on decisions:
- Meeting transcription to create searchable text records of conversations
- Basic conversation analytics identifying topics and themes
- Process adherence insights with clear references to source material
- Team communication pattern analysis for improvement opportunities
Elevated Risk Contexts
While our platform remains a supportive analytical tool, additional safeguards are implemented when it is used in:
- Employee development contexts where insights may inform training needs
- Quality assurance scenarios where conversation patterns are analyzed
- Process compliance monitoring where adherence to procedures is tracked
- Sales or customer service settings where interaction patterns are studied
Risk Management Approach
Our platform maintains strict operational boundaries:
- Functions solely as an analytical tool without autonomous decision-making
- Links all insights directly to source material for verification
- Serves in a supportive capacity to human decision-makers
- Provides transparency in how conclusions are reached
- Maintains clear "human-in-the-loop" design principles
- Ensures traceability between insights and source data (illustrated in the sketch after this list)
- Implements mandatory human review checkpoints
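To make the traceability and human-review principles above concrete, the following sketch shows one possible shape of an insight record that always carries references back to its source material and cannot be surfaced without passing a human checkpoint. The field names and structure are illustrative assumptions, not the platform's actual data model.

```python
# Illustrative sketch only: field names are hypothetical, not Meetric's schema.
from dataclasses import dataclass


@dataclass
class SourceReference:
    recording_id: str     # identifier of the original recording or transcript
    start_seconds: float  # where the supporting evidence begins
    end_seconds: float    # where the supporting evidence ends


@dataclass
class Insight:
    summary: str                     # the AI-generated observation
    sources: list[SourceReference]   # evidence links; must never be empty
    reviewed_by_human: bool = False  # set only after a human review checkpoint


def publish(insight: Insight) -> Insight:
    """Refuse to surface any insight that lacks verifiable source links or
    has not passed the mandatory human review checkpoint."""
    if not insight.sources:
        raise ValueError("Insight has no link to source material")
    if not insight.reviewed_by_human:
        raise ValueError("Insight has not passed human review")
    return insight
```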
To ensure responsible deployment:
- Regular risk assessments for all use cases
- Comprehensive safeguards applied proactively
- Compliance with high-risk system requirements where applicable
- Continuous monitoring and adjustment of safety measures
2. Technical Specifications and Safeguards
2.1 Data Processing Framework
Our system processes personal data in the following manner:
- All audio data is processed within EU-based infrastructure
- Data sent to third-party LLMs undergoes a comprehensive pseudonymization process that includes:
  - Replacement of all personally identifiable information (PII) with unique tokens
  - Automated detection and masking of:
    - Names, email addresses, and phone numbers
    - Employee IDs and user identifiers
    - Customer account numbers and reference codes
    - Physical addresses and location data
    - Financial and transaction details
  - Context-aware identification and replacement of indirect identifiers
  - Verification checks to ensure no PII patterns remain in the processed text
  - Maintenance of a secure mapping table for token restoration, stored separately from the pseudonymized data
  - Regular audits of pseudonymization effectiveness
The pseudonymized data maintains analytical utility while ensuring that no individual can be identified from the data sent to third-party LLMs alone, without access to the separately stored mapping information. A simplified illustration of this token-replacement approach appears at the end of this section.
- Third-party LLM providers are contractually prohibited from:
  - Using processed data to train or improve their AI models
  - Storing our data beyond the immediate processing need
  - Using our data for any purpose other than delivering the requested analysis
- Processing occurs in isolated environments with strict access controls
- Data minimization principles are applied throughout
- Retention periods are clearly defined and automatically enforced
- All insights and analysis are directly linked to source material through precise timestamps
- The system maintains a verifiable chain of evidence by always referencing original recordings and transcriptions
- No autonomous decisions are made by the platform; all insights are presented as suggestions with clear links to supporting evidence
- Users retain full control over decision-making, with the platform serving purely as an analytical tool
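The sketch below illustrates, in simplified form, how token replacement and the separately stored mapping table described in this section can work together. The regular expressions and token format are assumptions made for the example and cover only two of the PII categories listed above; they do not describe our production detection models.

```python
# Illustrative sketch only: the detectors and token format are simplified
# assumptions; production pseudonymization covers far more PII categories.
import re

# Minimal detectors for two of the PII categories listed above.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with unique tokens. The returned mapping table is
    stored separately from the pseudonymized text, so the text alone cannot
    identify any individual."""
    mapping: dict[str, str] = {}
    counter = 0

    for label, pattern in PII_PATTERNS.items():
        def replace(match: re.Match) -> str:
            nonlocal counter
            counter += 1
            token = f"[{label}_{counter}]"
            mapping[token] = match.group(0)  # original value lives only in the mapping
            return token

        text = pattern.sub(replace, text)

    return text, mapping


def restore(text: str, mapping: dict[str, str]) -> str:
    """Reverse the substitution; used only in-house, never by third parties."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

For example, "Contact jane.doe@example.com for details" becomes "Contact [EMAIL_1] for details"; only the separately stored mapping can restore the original address.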
2.2 Risk Management System
We maintain a comprehensive risk management system that includes:
- Regular algorithmic impact assessments
- Continuous monitoring of system outputs
- Bias detection and mitigation procedures
- Regular security vulnerability assessments
- Incident response protocols
2.3 Data Governance
Personal data handling includes:
- End-to-end encryption for all data in transit and at rest
- Automated anonymization where applicable
- Role-based access control
- Data minimization by design
- Clear data deletion procedures, with retention periods enforced automatically (see the illustrative sketch below)
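As an illustration of how defined retention periods can be enforced automatically, the sketch below applies a per-category retention schedule to stored records. The categories, durations, and interfaces are hypothetical examples, not our configured retention schedule.

```python
# Illustrative sketch only: categories and durations are hypothetical,
# not Meetric's configured retention schedule.
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data category.
RETENTION = {
    "recording": timedelta(days=365),
    "transcript": timedelta(days=365),
    "analytics": timedelta(days=730),
}


def is_expired(category: str, created_at: datetime, now: datetime | None = None) -> bool:
    """Return True when a record has outlived its defined retention period."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > RETENTION[category]


def enforce_retention(records: list[dict]) -> list[dict]:
    """Keep only records that are still within their retention period; the
    storage layer deletes everything else."""
    return [r for r in records if not is_expired(r["category"], r["created_at"])]
```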
3. Transparency and Oversight
3.1 Documentation Requirements
We maintain detailed documentation of:
- System architecture and components
- Training data sources and validation procedures
- Risk assessment methodologies
- Testing and validation results
- Incident reports and resolution measures
- Change management logs
3.2 Human Oversight
Our human oversight framework ensures:
- Clear allocation of oversight responsibilities
- Regular review of system outputs
- Authority to override system decisions
- Continuous training for oversight personnel
- Clear escalation procedures
3.3 User Information
We provide users with:
- Clear identification of AI-generated content
- Explanation of system capabilities and limitations
- Information about human oversight options
- Guidelines for responsible system use
- Regular updates about system changes
4. Quality Management
4.1 Quality Measures
Our quality management system includes:
- Regular accuracy assessments
- Bias monitoring and mitigation
- Performance benchmarking
- Regular system audits
- Continuous improvement protocols
4.2 Monitoring and Reporting
We implement:
- Automated system monitoring
- Regular compliance assessments
- Incident tracking and reporting
- Performance metrics tracking
- User feedback collection and analysis
5. Compliance Commitment
5.1 Regulatory Updates
We commit to:
- Regular review of regulatory requirements
- Proactive compliance updates
- Engagement with regulatory bodies
- Industry standard alignment
- Transparent communication about changes
We also maintain comprehensive compliance with:
- EU AI Act requirements as detailed in this policy
- GDPR requirements as detailed in our DPA
- Emerging regulatory frameworks
For detailed GDPR compliance measures and data protection protocols, please refer to our Data Protection Agreement.
5.2 User Support
We provide:
- Dedicated compliance support
- Regular training materials
- Clear usage guidelines
- Prompt incident response
- Regular compliance updates
6. Declaration of Conformity
We declare that:
- This system has been assessed for compliance with the EU AI Act
- Appropriate risk management measures are in place
- Regular monitoring and updates are performed
- Documentation is maintained and available for audit
- Human oversight is implemented as required
Last Updated: 2025-02-19