Compliance

How Bedrock maps to FCA regulation

The FCA does not mandate a specific compliance tool for AI-assisted advice. What it does mandate — through the Consumer Duty, SM&CR, SYSC, and data protection law — is a standard of evidence, accountability, and oversight that most firms cannot meet with existing systems. This page maps each regulatory requirement to the specific Bedrock feature that addresses it, with direct references to FCA rules and guidance.

All FCA references are drawn from the FCA AI Update (2024), the FCA Handbook, and the BoE/FCA AI Survey (2024).

Consumer Duty

Data-backed evidence of good outcomes

The requirement

The Consumer Duty requires firms to produce an annual assessment, evidenced with data, of whether they are delivering good outcomes for retail customers. For firms using AI to assist with advice, this means demonstrating — with hard evidence, not just process documentation — that AI-assisted advice actually led to suitable outcomes. A written policy stating "we review AI outputs" is not sufficient; firms need measurable proof.

FCA references

PRIN 2A — The Consumer Duty

Requires firms to act to deliver good outcomes for retail customers.

FCA AI Update, Para 3.43

"At least annually, a firm's board, or equivalent governing body, should review and approve an assessment, evidenced with data, of whether the firm is delivering good outcomes for its customers."

PS22/9 — Consumer Duty Policy Statement

Firms must be able to demonstrate that they are meeting the Duty's requirements, including through monitoring data.

The risk without Bedrock

Without structured outcome data, the annual board assessment becomes a box-ticking exercise based on anecdotal evidence. If the FCA challenges it, the firm cannot prove its AI-assisted advice was suitable.

How Bedrock solves this

Bedrock records every piece of AI-assisted advice the moment it enters the system, along with the review outcome (approved, modified, rejected), reviewer identity, and reasoning. At reporting time, firms can generate a complete data-backed assessment showing: how many pieces of advice were reviewed, approval/rejection rates, modification patterns, SLA compliance, and reviewer coverage — all from verifiable, tamper-proof records.

  • Automated annual compliance reporting from ledger data
  • Outcome metrics: approval rates, rejection rates, modification frequency
  • SLA compliance tracking across all review jobs
  • Reviewer workload and coverage analysis
  • Exportable reports for board presentation
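The reporting step described above can be pictured as a simple aggregation over ledger records. The sketch below is illustrative only; the field names (`decision`, `sla_met`) are assumptions for the example, not Bedrock's actual schema:

```python
from collections import Counter

def outcome_metrics(records):
    """Aggregate review outcomes into board-report metrics.
    Field names ('decision', 'sla_met') are illustrative assumptions."""
    decisions = Counter(r["decision"] for r in records)
    total = len(records)
    return {
        "reviewed": total,
        "approval_rate": decisions["approved"] / total,
        "rejection_rate": decisions["rejected"] / total,
        "modification_rate": decisions["modified"] / total,
        "sla_compliance": sum(r["sla_met"] for r in records) / total,
    }

sample = [
    {"decision": "approved", "sla_met": True},
    {"decision": "modified", "sla_met": True},
    {"decision": "rejected", "sla_met": False},
    {"decision": "approved", "sla_met": True},
]
metrics = outcome_metrics(sample)
```

Run over a full year of ledger records, the same aggregation yields the data-backed annual assessment the board must review and approve under PRIN 2A.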

Consumer Duty & Principles

Human-in-the-loop for AI-generated advice

The requirement

While the FCA does not use the phrase "human-in-the-loop," the combined effect of the Consumer Duty, Principle 2 (due skill, care and diligence), and Principle 9 (suitability of advice) creates an effective requirement for human oversight of AI-generated financial advice. AI cannot be the sole decision-maker on suitability — a qualified human must review the advice before it reaches the client. Critically, this review must be evidenced, not assumed.

FCA references

Principle 2 — Skill, Care and Diligence

A firm must conduct its business with due skill, care and diligence.

Principle 9 — Customers: Relationships of Trust (Suitability)

A firm must take reasonable care to ensure the suitability of its advice.

PRIN 2A.2.2R — Consumer Duty (Good Faith)

Firms must act in good faith toward retail customers, characterised by honesty, fair and open dealing.

FCA AI Update, Para 3.45

"Firms that use AI as part of their business operations remain responsible for ensuring compliance with our rules, including in relation to consumer protection."

The risk without Bedrock

If AI-generated advice reaches a client without documented human review and the advice turns out to be unsuitable, the firm has no defence. The FCA will ask: "Who reviewed this? When? What did they check?" Without evidence, the firm is exposed.

How Bedrock solves this

Bedrock Principal automatically routes AI-generated advice to FCA-authorised reviewers. Every review action — the reviewer opening the document, their assessment, annotations, and final decision — is recorded with timestamps and reviewer identity. The advice cannot be delivered to the client until a human has reviewed and approved it, and the entire chain of events is preserved in the immutable ledger.

  • Automatic routing of AI advice to qualified reviewers
  • Reviewer identity verified against FCA register
  • Every review action timestamped and recorded
  • Approval, modification, and rejection workflows
  • Advice cannot bypass review — enforced by the platform

SM&CR

Senior manager personal accountability for AI

The requirement

The Senior Managers and Certification Regime requires that one or more Senior Management Function holders have overall responsibility for each activity, business area, and management function of the firm. AI use in relation to any of these falls within scope. Senior managers are subject to the Conduct Rule requiring them to take reasonable steps to ensure the business they are responsible for is effectively controlled. This means a named individual is personally liable for AI failures — and they need evidence that controls were in place.

FCA references

FCA AI Update, Para 3.40

"The Senior Managers and Certification Regime (SM&CR) emphasises senior management accountability and is relevant to the safe and responsible use of AI."

FCA AI Update, Para 3.41

"All Senior Managers in SM&CR firms are required to have a Statement of Responsibilities... They are also subject to the Senior Manager Conduct Rules, including requiring Senior Managers to take reasonable steps to ensure that the business of the firm, for which they are responsible, is effectively controlled."

SMF24 — Chief Operations Function

Technology systems, including AI, are normally under the responsibility of SMF24.

SMF4 — Chief Risk Function

Has responsibility for overall management of risk controls, including AI risk exposures.

The risk without Bedrock

When an AI system produces unsuitable advice, the FCA will ask which senior manager was responsible and what steps they took to control the risk. If the answer is "we had a policy" but no evidence of enforcement, the senior manager is personally exposed under the Conduct Rules.

How Bedrock solves this

Bedrock provides senior managers with a real-time dashboard showing the status of every piece of AI-assisted advice: how many are pending review, how many were approved or rejected, SLA compliance rates, and reviewer activity. The immutable ledger serves as evidence that controls were not just written down but actively enforced. If challenged, the senior manager can demonstrate that every piece of AI advice was reviewed by a qualified human before reaching a client.

  • Senior management dashboard with real-time oversight metrics
  • Evidence that review controls are actively enforced, not just documented
  • Complete audit trail for regulatory investigations
  • Exportable evidence packages for SM&CR compliance
  • Chain verification proving records have not been tampered with

SYSC & Consumer Duty

Audit trail for AI-assisted decisions

The requirement

The SYSC sourcebook requires firms to have "sound administrative and accounting procedures and effective control and safeguard arrangements for information processing systems." For AI-assisted advice, this means maintaining a complete audit trail of: what data the AI used, which model or system produced the advice, who reviewed the output, what modifications were made, and the rationale for the final decision. This trail must be reliable — meaning it cannot be retroactively altered.

FCA references

SYSC 4.1.1R — General Organisational Requirements

"A firm must have robust governance arrangements, which include a clear organisational structure with well defined, transparent and consistent lines of responsibility, effective processes to identify, manage, monitor, and report the risks it is or might be exposed to, and internal control mechanisms, including sound administrative and accounting procedures and effective control and safeguard arrangements for information processing systems."

SYSC 9.1.1R — Record-Keeping

A firm must arrange for orderly records to be kept of its business and internal organisation, including all services and transactions undertaken by it.

FCA AI Update, Para 3.9

Notes "a range of high-level principles-based rules, as well as more detailed rules and guidance, that will be relevant to a firm's safe, secure and robust use of AI systems."

The risk without Bedrock

Most firms store compliance records in CRM systems, shared drives, and email threads. These records are editable, can be backdated, and cannot be independently verified. In an FCA investigation, the integrity of these records can be challenged, undermining the firm's entire defence.

How Bedrock solves this

Every record in the Bedrock Ledger is immutable — it cannot be changed, deleted, or backdated after it is written. Each record receives a SHA-256 hash and Ed25519 digital signature, and is hash-chained to the previous record. If anyone alters a past record, the chain breaks and the tampering is automatically detectable. Records include: the original advice document hash, metadata (client reference, document type, adviser identity), reviewer identity and decision, timestamps, and the full chain of custody.

  • SHA-256 document hashing for tamper detection
  • Ed25519 digital signatures proving record authenticity
  • Hash-chained records forming a tamper-evident verification chain
  • Append-only storage — records cannot be altered or deleted
  • Independent chain verification available to any party
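The hash-chaining mechanic is simple to illustrate. The sketch below is a simplified Python model: field names are assumptions, and the Ed25519 signature applied to each record hash in the real ledger is omitted here for brevity.

```python
import hashlib
import json

GENESIS = "0" * 64  # starting value before any records exist

def chain_record(prev_hash, payload):
    """Each record commits to the hash of the record before it."""
    body = json.dumps(payload, sort_keys=True)
    record_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev_hash": prev_hash, "payload": payload, "hash": record_hash}

def verify_chain(records):
    """Recompute every hash from the genesis value; any retroactive
    edit, deletion, or reordering breaks the chain."""
    prev = GENESIS
    for rec in records:
        body = json.dumps(rec["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

ledger, prev = [], GENESIS
for payload in [{"doc_hash": "a1b2", "decision": "approved"},
                {"doc_hash": "c3d4", "decision": "modified"}]:
    rec = chain_record(prev, payload)
    ledger.append(rec)
    prev = rec["hash"]

assert verify_chain(ledger)                       # intact chain verifies
ledger[0]["payload"]["decision"] = "rejected"     # tamper with history
assert not verify_chain(ledger)                   # tampering is detected
```

Because verification only requires recomputing hashes, any party holding the records can run this check independently, without trusting Bedrock or the firm.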

Consumer Duty & Vulnerability Guidance

Protecting vulnerable customers in AI interactions

The requirement

The Consumer Duty requires firms to take account of the different needs of their customers, including those with characteristics of vulnerability. The FCA's Vulnerability Guidance expects firms to consider vulnerable consumers at all stages of product and service design, including where the service relies on AI. In practice, this means AI systems must not inadvertently disadvantage vulnerable customers, and where AI identifies potential vulnerability indicators, the interaction should be escalated to a human.

FCA references

PRIN 2A — Consumer Duty

Requires firms to take account of the different needs of their customers, including those with characteristics of vulnerability and protected characteristics.

FG21/1 — Guidance for firms on fair treatment of vulnerable customers

"Firms should implement processes to evaluate where they have not met the needs of vulnerable consumers so that they can make improvements."

FCA AI Update, Para 3.28-3.29

"This includes where the product or service is heavily reliant on an AI or data solution. The Guidance sets out that firms should implement processes to evaluate where they have not met the needs of vulnerable consumers."

The risk without Bedrock

AI systems may produce advice that is technically suitable but fails to account for a client's vulnerability — for example, recommending a complex product to someone who has indicated they find financial decisions stressful. If the firm cannot show it had processes to catch this, it breaches the Duty.

How Bedrock solves this

Bedrock allows firms to tag advice records with client vulnerability indicators as part of the submission metadata. When vulnerability flags are present, Principal can automatically apply enhanced review requirements — routing to specialist reviewers, requiring additional checklist items, or flagging for senior oversight. The ledger records whether vulnerability was considered and what additional steps were taken, providing evidence for the FCA that the firm's processes actively protected vulnerable customers.

  • Vulnerability metadata tagging on advice records
  • Enhanced review routing for flagged clients
  • Specialist reviewer assignment for vulnerability cases
  • Additional checklist requirements for high-risk interactions
  • Evidence trail showing vulnerability was identified and addressed
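The enhanced-routing step can be pictured as a rules check applied to submission metadata. This is an illustrative sketch; the rule shape, pool names, and flag field are assumptions, not Principal's actual configuration:

```python
def review_requirements(metadata):
    """Derive review requirements from submission metadata. The rule
    shape, pool names, and flag field are illustrative assumptions."""
    reqs = {
        "reviewer_pool": "standard",
        "checklist": "base",
        "senior_oversight": False,
    }
    if metadata.get("vulnerability_flags"):
        reqs.update(
            reviewer_pool="vulnerability_specialists",  # specialist routing
            checklist="enhanced",                       # extra checklist items
            senior_oversight=True,                      # flagged for oversight
        )
    return reqs

standard = review_requirements({"client_ref": "C-001"})
enhanced = review_requirements({"client_ref": "C-002",
                                "vulnerability_flags": ["decision_stress"]})
```

The point for the FCA is that the escalation is applied mechanically at submission time and recorded in the ledger, rather than depending on an individual adviser remembering to escalate.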

Transparency & Consumer Duty

Explainability of AI-driven decisions

The requirement

The FCA expects firms to be able to explain the basis of their AI-driven decisions. While the Consumer Duty does not prescribe technical explainability requirements, the obligation to act in good faith (PRIN 2A.2.2R) and to communicate clearly with customers (Principle 7) means firms must be able to articulate why AI-assisted advice was given. A "black box" defence — claiming the firm does not understand why its AI reached a conclusion — is incompatible with the Duty and the suitability requirement under Principle 9.

FCA references

PRIN 2A.2.2R — Good Faith

Characterised by honesty, fair and open dealing with retail consumers.

Principle 7 — Communications

A firm must pay due regard to the information needs of its clients and communicate in a way that is clear, fair and not misleading.

FCA AI Update, Para 3.34-3.36

"AI systems should be appropriately transparent and explainable... Related rules under the Consumer Duty on consumer understanding refer to meeting the information needs of retail customers."

UK GDPR, Articles 13-14

Data controllers must provide information about automated decision-making, including "meaningful information about the logic involved."

The risk without Bedrock

If a client or the FCA asks why a particular piece of advice was given, the firm needs to explain the reasoning chain. If all they have is "the AI recommended it," they cannot meet the explainability standard.

How Bedrock solves this

Bedrock does not make AI models explainable — that is the responsibility of the AI provider. What Bedrock does is record the human layer of explainability: the reviewer's assessment, their reasoning for approving or modifying the advice, and any annotations they added. This creates a documented reasoning chain from AI output to human decision to client delivery. If the FCA asks "why was this advice given?", the firm can show: what the AI produced, who reviewed it, what they checked, and why they approved it.

  • Reviewer annotations and reasoning captured per review
  • Structured checklist completion recorded in the ledger
  • Link between AI output and human decision documented
  • Modification history showing what changed and why
  • Exportable reasoning chain for regulatory inquiries

Data Protection

Automated decision-making safeguards

The requirement

Under Article 22 of the UK GDPR, data subjects have the right not to be subject to decisions based solely on automated processing which produce legal or similarly significant effects. For AI-assisted financial advice, this means firms must ensure that a human is meaningfully involved in the decision — not just rubber-stamping an AI output. The firm must also be able to provide "meaningful information about the logic involved" in the automated processing.

FCA references

UK GDPR, Article 22

Data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significant effects.

FCA AI Update, Para 3.32

"The safeguards on automated decision making under Article 22 UK GDPR... provides data subjects with the right not to be subject to decisions based solely on automated processing."

FCA AI Update, Para 3.37

"Data controllers must provide data subjects with certain information about their processing activities, including the existence of automated decision-making and profiling."

The risk without Bedrock

If AI advice is delivered without meaningful human involvement, the firm may breach Article 22. "Meaningful" means the human actually reviewed the substance — not just clicked "approve." Without evidence of the review's depth, the firm is vulnerable.

How Bedrock solves this

Bedrock Principal enforces meaningful human review by requiring reviewers to complete structured checklists, demonstrate engagement with the document (time spent, scroll depth), and provide written rationale for their decision. The ledger records all of this, providing evidence that human involvement was substantive, not nominal. This directly addresses the Article 22 requirement for non-solely-automated decision-making.

  • Structured review checklists requiring substantive engagement
  • Time-on-document and scroll depth tracking
  • Written rationale required for each review decision
  • Evidence distinguishing meaningful review from rubber-stamping
  • Exportable review depth metrics for data protection compliance
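A review-depth gate of this kind can be sketched as follows. The thresholds and field names below are illustrative assumptions, not Bedrock's actual criteria:

```python
def is_meaningful_review(review, min_seconds=120, min_scroll=0.9):
    """Distinguish substantive review from rubber-stamping. Thresholds
    and field names are illustrative assumptions."""
    return (
        review["seconds_on_document"] >= min_seconds  # time actually spent
        and review["scroll_depth"] >= min_scroll      # document read in full
        and all(review["checklist"].values())         # every item completed
        and bool(review["rationale"].strip())         # written rationale given
    )

substantive = {
    "seconds_on_document": 540,
    "scroll_depth": 1.0,
    "checklist": {"suitability": True, "risk_profile": True},
    "rationale": "Fund selection matches the client's stated risk appetite.",
}
rubber_stamp = {
    "seconds_on_document": 8,
    "scroll_depth": 0.1,
    "checklist": {"suitability": True, "risk_profile": True},
    "rationale": "",
}
```

Recording the underlying engagement signals, not just the pass/fail result, is what lets a firm evidence to a regulator that the review was substantive.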

Operational Resilience

Operational resilience for AI-dependent services

The requirement

Under SYSC 15A, firms must ensure their Important Business Services remain within impact tolerances under severe but plausible scenarios. If a firm's advice process depends on AI, that AI system supports an Important Business Service. Firms need to plan for: AI model failures, third-party AI provider outages, and situations where AI outputs are unreliable. The advice process must continue to function even when the AI does not.

FCA references

SYSC 15A — Operational Resilience

Requires firms to ensure Important Business Services remain within impact tolerances under severe but plausible scenarios.

FCA AI Update, Para 3.13-3.14

"The FCA's work on operational resilience, outsourcing and CTPs is also of particular relevance... The requirements under SYSC 15A would include a firm's use of AI where it supports an IBS."

FCA AI Update, Para 4.4

"Recent developments, such as the rapid rise of Large Language Models (LLMs), for example, put resilience at the heart of what we do."

The risk without Bedrock

If the firm's AI system fails and they have no way to process, record, or review advice without it, they are operationally exposed. The FCA expects firms to have identified this as a risk and planned for it.

How Bedrock solves this

Bedrock operates independently of the firm's AI system. It receives advice documents via API regardless of which AI system (or human) produced them. If the firm's AI provider goes down, advisers can still submit documents manually, and the review and ledger recording process continues uninterrupted. If Bedrock itself experiences disruption, the platform provides a real-time backup status endpoint that firms can monitor, verifying that ledger record counts match between the primary database and immutable storage, with cryptographic spot-checks confirming record integrity. Documented recovery time objectives and automated failover procedures ensure the service returns within defined impact tolerances.

  • Platform-agnostic — works with any AI provider or manual submission
  • Continues operating if the firm's AI system is unavailable
  • Real-time backup status monitoring with integrity verification
  • Automated ledger backups with cryptographic spot-checks
  • Documented recovery time objectives and failover procedures
  • Independent infrastructure, sharing no single point of failure with the firm's systems
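The backup integrity check can be illustrated as a record-count comparison plus cryptographic spot-checks. This is a simplified sketch (the real status endpoint and storage layout are not shown, and the data shapes are assumptions):

```python
import hashlib
import random

def backup_status(primary, backup, spot_checks=3):
    """Report whether primary-database and immutable-backup record counts
    match, and whether randomly sampled records hash identically.
    Data shapes here are illustrative assumptions."""
    status = {"counts_match": len(primary) == len(backup),
              "spot_checks_ok": None}
    if status["counts_match"] and primary:
        sampled = random.sample(range(len(primary)),
                                min(spot_checks, len(primary)))
        status["spot_checks_ok"] = all(
            hashlib.sha256(primary[i].encode()).hexdigest()
            == hashlib.sha256(backup[i].encode()).hexdigest()
            for i in sampled
        )
    return status

records = ['{"id": 1}', '{"id": 2}', '{"id": 3}']
healthy = backup_status(records, list(records))
corrupt = backup_status(records, ['{"id": 1}', '{"id": 9}', '{"id": 3}'])
```

Comparing hashes rather than raw contents keeps the check cheap enough to run continuously, which is what makes a real-time status endpoint practical.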

Outsourcing & Third Parties

Third-party AI provider oversight

The requirement

Under SYSC 8, firms must take reasonable steps to avoid undue operational risk when outsourcing critical functions. The 2024 BoE/FCA survey found that a third of all AI use cases are third-party implementations, up from 17% in 2022. Firms using third-party AI to generate advice must demonstrate oversight of those third-party outputs, including monitoring the quality and suitability of the AI's work product.

FCA references

SYSC 8 — Outsourcing

Requires firms to take reasonable steps to avoid undue operational risks when outsourcing critical functions.

FG 16/5 — Cloud and Third-Party IT Services

Guidance on firms outsourcing to cloud and other third-party IT services.

FCA AI Update, Para 3.17-3.18

Notes concerns about concentration of third-party technology services and risks from Big Tech partnerships.

BoE/FCA AI Survey 2024

"A third of all AI use cases are third-party implementations... greater than the 17% we found in the 2022 survey."

The risk without Bedrock

If a firm relies on a third-party AI provider to generate advice and that AI produces unsuitable recommendations, the firm — not the AI provider — is liable. The firm must show it had adequate oversight of the third party's outputs.

How Bedrock solves this

Bedrock sits between the AI provider and the client, regardless of which third party produced the AI output. Every piece of advice passes through the same review and recording process. This gives the firm a consistent, verifiable oversight layer across all AI providers — whether they use one system or many. If the firm changes AI providers, the audit trail continues unbroken.

  • Provider-agnostic — records advice from any AI system
  • Consistent review standards across all third-party AI providers
  • Advice quality metrics tracked per provider for oversight
  • Unbroken audit trail even when switching AI providers
  • Evidence of third-party output monitoring for SYSC 8 compliance

Contestability & Redress

Consumer complaints and redress for AI decisions

The requirement

The FCA requires firms to maintain complaints handling procedures to ensure complaints are handled fairly and promptly, including complaints about AI-driven decisions. If a client challenges the suitability of AI-assisted advice, the firm must be able to reconstruct exactly what happened: what the AI produced, who reviewed it, what was approved, and when. Without this, the firm cannot fairly investigate the complaint.

FCA references

DISP 1 — Complaints Handling

Rules and guidance on how firms should deal with complaints, including complaints about AI decisions concerning financial services.

FCA AI Update, Para 3.45-3.46

"Where a firm's use of AI results in a breach of our rules... there are a range of mechanisms through which firms can be held accountable and through which consumers can get redress."

FCA AI Update, Para 3.44

"Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful."

The risk without Bedrock

Client complaints about AI-assisted advice will increase as AI adoption grows. If the firm cannot reconstruct the full chain of events — from AI output to human review to client delivery — they cannot fairly investigate the complaint, leaving them exposed to FOS referrals and potential redress.

How Bedrock solves this

Bedrock preserves the complete chain of custody for every piece of advice. When a complaint is received, the firm can instantly retrieve: the original advice document (by hash), who reviewed it and when, what the reviewer decided, any modifications made, and the certificate of completion. This allows fair, thorough investigation of complaints with verifiable evidence. If the complaint escalates to the Financial Ombudsman Service — where the FOS independently assesses whether the firm acted fairly — the same tamper-proof evidence package can be submitted directly, giving the firm a defensible position that editable records cannot provide.

  • Instant retrieval of any advice record by reference or date
  • Complete chain of custody from submission to certificate
  • Tamper-proof evidence that cannot be disputed
  • Exportable case files for complaints investigation
  • Independent verification available to FOS if needed

One platform, 10 regulatory requirements

Bedrock doesn't replace your compliance team. It gives them the infrastructure to prove what they're already doing — with evidence that regulators, auditors, and clients can independently verify.