EU AI Act Compliance

Last Updated: 9th January 2025

Overview

We are committed to ensuring our AI-powered platform aligns with the EU AI Act. As a B2B SaaS company whose platform generates marketing content for e-commerce, we consider our use case to fall under the low-risk category of AI systems.

Our Services are not intended for use in high-risk AI applications (e.g., biometric identification, credit scoring, employment decisions, or law enforcement), nor do we support use in prohibited applications under the EU AI Act.

Our Commitments

  • Transparency: We clearly indicate when content is AI-generated.
  • Safety: We monitor and filter outputs to avoid harmful, illegal, or misleading content.
  • User Control: Users can opt out of AI-generated content where applicable.
  • Feedback: Users can report problematic outputs for review.
  • Documentation: We maintain internal records of model usage, provider details, and compliance practices.

Model Usage

We use licensed third-party LLMs (e.g., OpenAI's GPT models) via secure APIs. We do not develop or fine-tune foundation models ourselves.

Surgent does not retain, reuse, or repurpose AI-generated content or user inputs for training or any other secondary purpose. Third-party AI providers (e.g., OpenAI) receive data only as needed to generate responses and are contractually restricted from using it to train their models unless the user separately agrees.

Where our AI Services process personal data (e.g., user prompts, inputs, or logs), we rely on the following GDPR legal bases depending on context: (i) contractual necessity, to fulfill the service requested by users; (ii) our legitimate interests in improving service functionality, where these do not override users' rights; and (iii) explicit consent, where required.

Our AI systems are used to assist users by generating content. They do not make autonomous decisions on behalf of users or third parties.

Responsible Party

Andy Curlton is the designated person responsible for AI compliance and oversight. For questions or concerns, please contact support@surgent.ai (Attn: Andy Curlton).

Risk Mitigation Measures

Although our AI systems are classified as low-risk, we implement the following measures to maintain compliance and protect users:

  • Regular audits of AI system outputs for bias and harmful content
  • Clear labeling of AI-generated content
  • Human oversight of AI systems and their outputs
  • Robust data protection measures in line with GDPR requirements
  • A process for users to flag problematic AI outputs

Compliance Framework

Our compliance framework includes:

  • Regular risk assessments of our AI systems
  • Technical documentation of our AI systems and their capabilities
  • Documented incident response procedures
  • Regular staff training on AI ethics and compliance
  • Vendor assessments for third-party AI providers

Ongoing Monitoring

We review our AI systems and processes regularly to ensure continued compliance as the EU AI Act evolves. This includes:

  • Tracking regulatory developments and guidance
  • Updating internal processes as needed
  • Consulting with legal experts on compliance requirements
  • Conducting periodic compliance audits

This statement will be revised as EU AI Act obligations take effect under the Act's phased implementation timeline, with most requirements applying from 2026.

User Rights

As a user of our AI-powered services, you have the right to:

  • Know when you are interacting with an AI system
  • Understand how your data is used to generate AI outputs
  • Report concerning AI outputs
  • Request human review of important decisions

Our AI is used to assist, not replace, human decision-making. All outputs are user-directed and subject to user control.

Contact Information

For questions or concerns about our EU AI Act compliance practices, please contact us at:

Email: support@surgent.ai

Attn: Andy Curlton, AI Compliance Lead