Transaction Monitoring Policies & Procedures (Bridge-Integrated)
1. Purpose and objectives
This Transaction Monitoring Policies & Procedures (the “TM Policy”) establishes the Company’s minimum requirements for (i) monitoring covered activity, (ii) generating and triaging alerts, (iii) investigating and documenting cases, and (iv) escalating suspected suspicious activity to the appropriate filing/executing party (e.g., Bridge) in a controlled and auditable manner.
This TM Policy is written to support the Company’s AML/CTF program, including U.S. Bank Secrecy Act (BSA) / FinCEN obligations where applicable. See FinCEN BSA statutes and regulations hub: https://www.fincen.gov/resources/statutes-and-regulations/bank-secrecy-act.
2. Scope
This TM Policy applies to:
- All users and accounts within the Program.
- All value-moving events, including deposits, purchases/sales, conversions, withdrawals, payouts, transfers, and refunds/returns.
- User behavior and operational signals that may indicate illicit activity (device/IP anomalies, account changes, repeated failed KYC, chargebacks, support complaints).
3. Operating model and allocation of responsibilities
Transaction monitoring responsibilities must be clearly allocated between the Company and Bridge (and other vendors) and documented in a RACI.
In many Bridge-enabled models:
- Bridge performs transaction monitoring and transaction-level compliance actions for Bridge services as part of its compliance program.
- The Company performs monitoring over Company-controlled signals (UI behavior, account lifecycle, device/fraud signals) and escalates relevant suspicious activity to Bridge promptly.
See the Bridge Developer Agreement for the contractual allocation of these responsibilities.
4. Data sources and instrumentation
The Company maintains an inventory of data sources used for monitoring, including:
- User profile and KYC/KYB data: onboarding attributes, risk rating, refresh events.
- Transaction events: orders, payouts, status changes, returns, reversals, fees.
- Behavioral and device signals: login anomalies, IP geolocation, device fingerprint, velocity.
- Wallet/address data (if applicable): deposit/withdrawal addresses, blockchain analytics risk indicators.
- Partner notifications: Bridge webhooks/events, PSP/bank alerts, fraud/chargeback notifications.
- Support signals: complaints, disputes, suspicious narratives, coercion indicators.
All monitoring-relevant events must be logged with integrity controls and retained per record retention policy.
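The integrity-controls requirement above can be sketched as an append-only, hash-chained event log, where tampering with any earlier entry breaks verification of the chain. The `EventLog` structure and field names below are illustrative assumptions, not a prescribed schema.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class EventLog:
    """Append-only log; each entry chains the SHA-256 hash of the
    previous entry, so any after-the-fact edit is detectable."""
    entries: list = field(default_factory=list)

    def append(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev_hash = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev_hash:
                return False
            if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev_hash = e["hash"]
        return True
```

In practice the same effect is usually achieved with WORM storage or a signed audit log from the case-management vendor; the sketch only illustrates the property the policy requires.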
5. Monitoring approach and risk-based coverage
5.1. Risk segmentation
Monitoring coverage (thresholds, frequency, and required review) is risk-based and considers:
- user risk rating,
- product enabled (payouts, cross-border, custody),
- geography and counterparties,
- historical behavior, prior alerts and outcomes.
5.2. Types of monitoring
The Company employs layered monitoring:
- Rules/scenarios: deterministic scenarios and thresholds for known typologies.
- Behavioral analytics: anomaly detection based on expected behavior patterns (where implemented).
- Case-driven monitoring: enhanced monitoring for users under review or under restriction.
5.3. Alert severity
Alerts are categorized (e.g., Low/Medium/High/Critical) to drive SLAs and escalation requirements.
6. Scenarios and typologies (non-exhaustive)
The Company maintains a controlled library of scenarios with documented rationale, parameters, and owner. Common scenarios may include:
- Structuring / smurfing: repeated transactions just below thresholds; burst patterns.
- Velocity anomalies: sudden spikes in volume/frequency inconsistent with profile.
- Rapid in-and-out: funds in → immediate conversion → payout (layering indicator).
- Multiple accounts / identity reuse: shared devices/emails/phones; linked identities.
- High-risk geography exposure: IP/user/residence mismatch; sanctioned/high-risk regions (coordinate with sanctions policy).
- Third-party funding / payout: inconsistent beneficiary names; unusual funding sources (where data available).
- Chargeback / return patterns: repeated returns, fraud indicators on fiat rails.
- Wallet risk indicators (if applicable): exposure to sanctioned entities/mixers/darknet services.
- Behavioral red flags: coercion, mule behavior indicators, scripted patterns.
Scenario parameters and thresholds are configured in a separate internal control document (or appendix) to prevent exposing operational thresholds in publicly distributed documents.
7. Alert workflow (triage, investigation, disposition)
7.1. Triage
Upon alert creation, analysts must:
- confirm data completeness and alert validity,
- review user profile, risk rating, and historical activity,
- determine whether the alert can be cleared as a false positive (with documented rationale) or requires investigation.
7.2. Investigation steps
Investigations should include (as applicable):
- activity timeline reconstruction,
- comparison against expected user behavior and stated purpose,
- review of linked accounts and device signals,
- review of counterparties and payout destinations (where available),
- review of sanctions/PEP/adverse media triggers,
- review of blockchain analytics signals (where applicable),
- requests for information (RFI) to user when appropriate and permitted.
7.3. Dispositions
Every alert/case must end in a documented disposition, such as:
- No issue / false positive
- Inconclusive – monitor
- Policy violation – enforce (limits, suspension, termination)
- Suspicious – escalate to Bridge
- Suspicious – report (where Company is responsible and applicable)
7.4. Case documentation standards
Case records must capture:
- unique case ID, dates, analyst, reviewer,
- trigger and data reviewed,
- actions taken (holds, restrictions, RFIs),
- decision rationale,
- approvals,
- communications and escalations (including to Bridge),
- evidence attachments and links to logs.
7.5. Minimum case file contents (evidence standard)
Each case file shall include, at minimum (as applicable):
- User profile snapshot: KYC/KYB status, risk rating and reason codes, key identifiers, historical alerts.
- Transaction/event timeline: normalized list of events (type, amount, asset/rail, timestamps, status).
- Link analysis: related accounts/devices/identities and relationship rationale.
- Counterparty data: payout destinations, beneficiary name match checks (where available).
- Open-source or vendor signals: sanctions/PEP/adverse media flags; blockchain analytics flags (if applicable).
- Analyst narrative: concise fact pattern, red flags observed, and basis for disposition.
- Reviewer approval: four-eyes (maker-checker) review for high-severity cases and all “suspicious” dispositions.
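The documentation standards in sections 7.4 and 7.5 can be expressed as a case record with a completeness check gating closure. The field names and disposition strings below are an illustrative convention, not a mandated schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CaseRecord:
    """Minimum case file fields per sections 7.4/7.5 (names illustrative)."""
    case_id: str
    opened_at: datetime
    analyst: str
    trigger: str
    profile_snapshot: dict
    event_timeline: list
    narrative: str = ""
    disposition: str = ""          # e.g. "false_positive", "suspicious_escalate"
    reviewer: str = ""             # required for "suspicious" dispositions
    actions_taken: list = field(default_factory=list)
    evidence_links: list = field(default_factory=list)

    def closure_gaps(self) -> list[str]:
        """Completeness gaps that block closing the case."""
        gaps = []
        if not self.disposition:
            gaps.append("missing disposition")
        if not self.narrative:
            gaps.append("missing analyst narrative")
        if self.disposition.startswith("suspicious") and not self.reviewer:
            gaps.append("suspicious disposition requires reviewer approval")
        return gaps
```

A case-management system would enforce these checks at the close-case step so that no “suspicious” disposition passes without the second reviewer.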
8. Escalation to Bridge
When suspicious activity is identified and Bridge is the executing monitoring/filing party for Bridge services:
- the Company escalates promptly via the agreed channel with a standardized package:
  - user identifiers,
  - summarized narrative of concern,
  - transactions/events involved (timestamps, IDs, amounts, rails/assets),
  - any supporting evidence (screenshots/logs, device signals, user communications),
  - recommended actions (hold/suspend/EDD) where appropriate.
Escalations must be tracked for closure and outcomes.
8A. Escalation standard (SAR-support package)
Where Bridge is the SAR filing party, the Company shall provide an escalation package sufficient to support a SAR-quality narrative, including:
- who: user identity details available to the Company (and business ownership/authorized users where applicable),
- what: activity summary and typology (structuring, rapid in/out, mule indicators, etc.),
- when: timeline with timestamps and unique transaction/event IDs,
- where: geographies, IP/device location indicators, corridor details (where available),
- why: basis for suspicion (red flags) and why activity is inconsistent with expected profile,
- supporting evidence: logs, screenshots, user communications, vendor signals.
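The who/what/when/where/why structure above lends itself to a simple completeness check before an escalation is sent. The section keys are an illustrative convention for the agreed channel, not a format mandated by Bridge.

```python
# Section keys mirror the SAR-support package in this policy;
# the exact field naming is an assumed convention.
REQUIRED_SECTIONS = ("who", "what", "when", "where", "why", "evidence")

def package_gaps(package: dict) -> list[str]:
    """Return the sections that are missing or empty in an
    escalation package, in policy order."""
    return [s for s in REQUIRED_SECTIONS if not package.get(s)]
```

Running this check at submission time keeps incomplete escalations from entering the queue and supports the SAR-quality-narrative standard.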
8B. Escalation SLAs and severity
The Company shall define and enforce SLAs by severity (configured in the Program Parameters and Definitions), including:
- Critical: same-day escalation (or within hours) where sanctions exposure, fraud rings, or high-impact patterns are suspected.
- High: escalation within 1 business day.
- Medium/Low: escalation within defined timeframes based on queue capacity and risk.
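The severity-to-SLA mapping can be sketched as a lookup table with a deadline and breach check. The durations below are placeholders (the policy states that authoritative values live in the Program Parameters and Definitions), and the calculation ignores business-day logic for simplicity.

```python
from datetime import datetime, timedelta

# Illustrative SLA table; real values belong in the Program
# Parameters and Definitions, and "business day" handling is omitted.
SLA_BY_SEVERITY = {
    "critical": timedelta(hours=4),
    "high": timedelta(days=1),
    "medium": timedelta(days=3),
    "low": timedelta(days=5),
}

def escalation_deadline(alert_time: datetime, severity: str) -> datetime:
    """Latest time by which the escalation must be sent."""
    return alert_time + SLA_BY_SEVERITY[severity.lower()]

def is_breached(alert_time: datetime, severity: str, now: datetime) -> bool:
    """True once the SLA deadline has passed without escalation."""
    return now > escalation_deadline(alert_time, severity)
```

Queue tooling would typically surface `is_breached` as an aging indicator so approaching deadlines drive triage order.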
9. Controls: limits, holds, and enforcement actions
The Company maintains risk-based controls that may include:
- velocity limits by user risk tier,
- limits on payout destinations and changes (cooldown periods),
- manual review gates for higher-risk activity,
- temporary holds pending review,
- account suspension/termination consistent with contractual responsibilities and applicable law.
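The velocity-limit control above reduces to a deterministic gate on a rolling total per risk tier. The tier names and limit amounts are hypothetical; actual limits are maintained in the internal control configuration.

```python
# Hypothetical 24-hour payout limits per risk tier; real values are
# maintained in the internal control configuration, not this policy.
DAILY_LIMIT_BY_TIER = {"low": 50_000, "medium": 10_000, "high": 2_000}

def allow_payout(risk_tier: str, amount: float, spent_last_24h: float) -> bool:
    """Reject a payout that would push the user's rolling 24-hour
    total over the limit for their risk tier."""
    return spent_last_24h + amount <= DAILY_LIMIT_BY_TIER[risk_tier]
```

A rejected payout would typically route to a manual review gate or temporary hold rather than a silent failure, consistent with the controls listed above.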
10. Quality assurance, tuning, and model governance
10.1. QA program
The Company operates a QA program that:
- samples cleared and escalated alerts,
- assesses decision quality and documentation completeness,
- measures false positive/false negative indicators,
- identifies training needs and control gaps.
10.2. Tuning cadence
Scenario thresholds and logic are reviewed on a defined cadence and after major events:
- new product/geography launch,
- material fraud/ML incident,
- partner/regulator feedback,
- significant changes in alert volumes.
All tuning changes are change-controlled with approval, testing, and backtesting where feasible.
11. Metrics and management reporting
The Company tracks and reviews TM metrics, including:
- alert volumes by scenario and risk tier,
- time-to-triage and time-to-close,
- escalation counts and outcomes,
- enforcement actions,
- QA results (error rates, documentation gaps),
- system uptime and event ingestion latency.
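A metric such as time-to-close can be computed directly from case timestamps; the median is used here because alert-handling durations are typically skewed by a few long investigations. Field names are illustrative.

```python
from datetime import datetime
from statistics import median

def median_time_to_close_hours(cases: list[dict]) -> float:
    """Median hours from case opening to closure, skipping cases
    still open (closed_at is None); field names are illustrative."""
    durations = [
        (c["closed_at"] - c["opened_at"]).total_seconds() / 3600
        for c in cases
        if c.get("closed_at")
    ]
    return median(durations) if durations else 0.0
```

The same approach extends to time-to-triage and per-scenario breakdowns for the management reporting described above.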
12. Training
TM personnel and relevant stakeholders receive training covering:
- typologies relevant to the Program’s products,
- investigation techniques and documentation standards,
- escalation to Bridge and confidentiality considerations,
- sanctions and fraud coordination.
13. Record retention and confidentiality
TM records are retained per retention policy and protected by:
- least-privilege access controls,
- audit logs,
- secure storage and transmission.
Confidentiality requirements (including restrictions on tipping-off where applicable) must be followed.
14. Policy governance
- Effective date: See the Program Parameters and Definitions (Program metadata).
- Owner: Program owner / Compliance Officer (see the Program Parameters and Definitions).
- Review: at least annually and upon material change.
- Exceptions: documented and approved by Compliance with compensating controls.
