
AWS Business Requirements Mapping

From requirements to architecture: a playbook for mapping business and technical needs to AWS designs across domains like payments, KYC, data mesh, and more.

40 min read
Updated Dec 15, 2025

AWS Architecture Decision Playbook

From Requirements → Architecture → Implementation Plan

  • Audience: Lead/Principal engineers, solution architects, tech leads

  • Scope: How to go from business + technical requirements to AWS designs for:

    1. Payment Processing
    2. ACH
    3. KYC
    4. Account Opening Systems
    5. Batch Jobs & Scheduling
    6. Service Mesh & Microservices
    7. Data Mesh & Data Platforms
    8. ETL / Data Ingestion / Data Pipelines
    9. Database Design & Technology Selection
    10. AI Agents & Agentic AI Services
    11. Real-Time Communication (RTC) Systems
    12. Other Modern System Requirements (cross‑cutting)
  • Reference frameworks:

    • AWS Well-Architected Framework (6 pillars)[1][2][3]
    • Current AWS architectures / blogs (payments, data mesh, etc.)[4][5][6][7]

0. Universal Requirements → Architecture Process

Use this same process for each domain.

0.1 Requirements Breakdown Template

For a given domain/workload:

  1. Business Need

    • What revenue/risk KPI does this support? (e.g., authorization approval rate, fraud loss, onboarding TAT)
    • Who is the primary user (customer, operations, compliance, partner)?
    • Regulatory context (PCI DSS, Nacha, KYC/AML, GDPR, SOX, HIPAA, etc.)
  2. Functional Requirements (FR)

    • Core user journeys (e.g., “submit payment → authorization → capture → settlement”)
    • System actions (validation, enrichment, routing, notifications, audit)
    • Integrations (core banking, card networks, bureaus, KYC providers, CRM, data warehouse)
  3. Non-Functional Requirements (NFR)

    • Performance: Latency, throughput (TPS), batch SLAs
    • Reliability: RPO/RTO, multi-AZ vs multi-Region, DR strategy
    • Scalability: Peak vs baseline, regional expansion
    • Security & Compliance: Data classification, encryption, authn/z, audit
    • Operability: SLOs/SLIs, runbooks, deployment model
    • Cost constraints: Target unit economics, opex/capex profile

0.2 AWS Mapping Heuristics

High-level mapping from NFRs to AWS primitives:

| Requirement | Typical AWS Default |
| --- | --- |
| < 100 ms latency, bursty | API Gateway + Lambda + DynamoDB (or Aurora) |
| 24×7 high TPS, stable | NLB/ALB + ECS/EKS + Aurora/DynamoDB |
| Strict PCI / Nacha isolation | Separate accounts + VPC isolation + AWS Payment Cryptography, KMS CMKs[8] |
| High compliance & audit | AWS Organizations, Config, CloudTrail, CloudWatch, Security Hub |
| Event-driven flows | EventBridge, Amazon MSK, SQS, SNS, Step Functions[4][5] |
| Long-running workload | ECS/EKS or AWS Batch (not Lambda)[9] |
| Real-time stream processing | Kinesis / MSK + Lambda / Flink / KDA[7][10] |

1. Payment Processing

1.1 Requirements Breakdown

  • Business need
    • Authorize & process card / instant payments with sub-200 ms E2E latency, high availability, PCI compliance.
    • Support multi-region, multi-tenant, high-volume workloads.[4]
  • Functional
    • Tokenization, PAN vaulting (if issuer/acquirer).
    • 3DS, fraud checks, sanctions screening, risk scoring.
    • Routing to schemes, PSPs, bank endpoints.
    • Ledger posting, settlement, chargebacks.
    • Real-time monitoring, reconciliation, reporting.
  • Non-functional
    • Latency: 50–200 ms target per transaction (intra-region).
    • Availability: ≥ 99.99%, multi-AZ, often multi-Region active/active.[5][4]
    • Compliance: PCI DSS, PCI PIN, PCI P2PE, data residency, audit, key management (HSM/APC).[8]

1.2 Domain-Specific Challenges

  • Pitfalls
    • Overcoupled monolith handling auth + fraud + ledger in one runtime → slow deployments, blast radius.
    • Database hot-spotting on transactional tables (e.g., ledger, balances).
    • Mis-handled idempotency → duplicate transactions.
    • Poor observability; difficult root cause for declines.
  • Constraints
    • PCI segmentation, regulated secrets, cryptographic key handling.
    • Network latency to card networks / RTGS / local payment schemes.
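The idempotency pitfall above is worth making concrete. A minimal sketch, assuming hypothetical service names: in production the store would be a DynamoDB table written with a conditional expression (e.g., `attribute_not_exists(pk)`) so only the first writer wins; here an in-memory Map stands in for the table.

```typescript
// Sketch of idempotency handling for a payment API (names are illustrative).
// A retry with the same idempotency key returns the stored result instead
// of charging the customer twice.

type PaymentResult = { status: "APPROVED" | "DECLINED"; authCode: string };

const idempotencyStore = new Map<string, PaymentResult>();

function processPayment(
  idempotencyKey: string,
  authorize: () => PaymentResult
): PaymentResult {
  // On a retry, short-circuit with the previously stored result.
  const existing = idempotencyStore.get(idempotencyKey);
  if (existing !== undefined) return existing;

  const result = authorize();
  idempotencyStore.set(idempotencyKey, result);
  return result;
}
```

The key design point is that the "check then write" must be atomic in the real store (a conditional write), otherwise two concurrent retries can both pass the check.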

1.3 AWS Implementation Mapping

High-level pattern: Event-driven, microservice architecture with streaming backbone.[5][4]

  • Ingress
    • Amazon API Gateway (REST) or ALB → Amazon ECS/EKS/Lambda for API adapters.
    • mTLS, WAF, throttling, JWT auth (Cognito or external IdP).
  • Processing
    • Orchestration via Step Functions for multi-step flows (auth → fraud → ledger → notifications).[5]
    • Business microservices (AuthService, FraudService, RiskService, LedgerService) on:
      • ECS Fargate for containerized workloads with auto scaling.
      • EKS if you standardize on Kubernetes and need service mesh.
    • Event backbone:
      • Amazon MSK (Kafka) or Amazon Kinesis for payment lifecycle events.[4]
      • Topics/streams per stage: payment.initiated, payment.validated, payment.settled, etc.
  • Data
    • DynamoDB for idempotency keys, payment state, session data (needs low-latency & horizontal scale).
    • Aurora PostgreSQL / Aurora DSQL for ledger & relational financial data requiring ACID, complex queries.[11]
    • S3 + Lake Formation + Athena/Redshift for long-term analytics, reconciliation exports.
  • Security & keys
    • AWS Payment Cryptography + KMS + optionally CloudHSM for key management and HSM offload.[8]
    • VPC-private subnets, interface VPC endpoints, security groups with least privilege.
  • Observability
    • CloudWatch metrics, logs, alarms + X-Ray tracing across services.
    • Central dashboards for auth rates, latency, error codes, scheme-wise performance.

Example Reference Pattern (Text Diagram)

Client → API Gateway/WAF → Ingress Service (ECS/Lambda)
  → EventBridge / MSK "payment.initiated"
  → Validation Service (ECS/Lambda)
  → Fraud/Risk Service (ECS/Lambda)
  → Routing Service (ECS/Lambda) → External schemes (via PrivateLink/VPN)
  → Ledger Service (Aurora)
  → Notification Service (SNS/AppSync)
  → Events → S3 Data Lake → Athena/Redshift
  → Metrics/Logs → CloudWatch/X-Ray
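The lifecycle events flowing through the backbone benefit from a versioned envelope. A sketch, following the `payment.initiated` / `payment.validated` / `payment.settled` topic convention above; all field names are assumptions, not a prescribed schema.

```typescript
// Sketch of a versioned envelope for payment lifecycle events.
// schemaVersion lets consumers handle breaking changes explicitly.

interface PaymentEvent {
  eventType: "payment.initiated" | "payment.validated" | "payment.settled";
  schemaVersion: number;      // bump on breaking payload changes
  paymentId: string;
  occurredAt: string;         // ISO-8601, producer clock
  payload: Record<string, unknown>;
}

function makeEvent(
  eventType: PaymentEvent["eventType"],
  paymentId: string,
  payload: Record<string, unknown>
): PaymentEvent {
  return {
    eventType,
    schemaVersion: 1,
    paymentId,
    occurredAt: new Date().toISOString(),
    payload,
  };
}
```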

1.4 AWS Service Comparison & Decision Criteria

| Concern | Option | Use When | Tradeoffs |
| --- | --- | --- | --- |
| Stateless compute for APIs | Lambda | Spiky traffic, low TPS, sub-15 min steps, event-driven patterns | Cold starts, concurrency tuning |
|  | ECS Fargate | Steady or high TPS, container standard, moderate infra ops | Higher base cost vs Lambda at low volume |
|  | EKS | Org standardizes on K8s, complex mesh & sidecars | Highest ops/complexity |
| Operational store | DynamoDB | Single-digit ms latency, predictable access patterns, flexible schema, global tables | Requires careful key design |
|  | Aurora (Postgres/MySQL) | ACID, joins, stored procedures, strong consistency, ledger | Scaling writes needs planning |
| Event backbone | MSK | Kafka ecosystem, exactly-once semantics, complex streaming | Requires Kafka skillset |
|  | Kinesis | AWS-native, managed, easier ops | Shard management, limits for some use cases |

1.5 Well-Architected Highlights (Example Decisions)

  • Security: PCI segmentation via isolated accounts; KMS CMKs with key policies; strict IAM, Secrets Manager for credentials.
  • Reliability: Multi-AZ, cross-Region MSK clusters, Aurora global databases; chaos testing; DLQs on all async integrations.[4][5]
  • Performance: Pre-warmed Lambdas or provisioned concurrency; partition strategies for DynamoDB/MSK.
  • Cost: Fargate spot where acceptable; MSK Serverless vs provisioned; S3 infrequent access for historical data.[4]
  • Sustainability: Right-size compute, use serverless where possible for elasticity.

1.6 Cost Patterns (Rough)

  • Small:
    • API Gateway + Lambda + DynamoDB + single-region Aurora.
    • Hundreds–low thousands USD/month.
  • Medium:
    • Multi-AZ Aurora, MSK/Kinesis, ECS Fargate microservices.
    • Low 5-figure USD/month.
  • Large/Global:
    • Multi-region active/active, MSK clusters, Aurora global, App Mesh.
    • 5–6 figures USD/month; heavy emphasis on cost observability.

2. ACH (Automated Clearing House)

2.1 Requirements Breakdown

  • Business need
    • Low-cost batch credit/debit transfers (payroll, vendor payments, bill pay).
  • Functional
    • File ingestion (Nacha / custom formats), validation, cut-offs, returns processing, prenotes.
    • Integration with ODFI/RDFI, core banking, GL/ledger, reconciliation.
  • Non-functional
    • Large file/batch handling; time‑bound SLAs (cutoffs), reliable retries.
    • Nacha compliance & audit, encryption at rest/in transit.[12]

2.2 Challenges

  • Handling large batch files (GB-scale) without timeouts.
  • Late/delayed files; partial failures.
  • Strict Nacha file format validation.
  • Operational visibility around batch status.

2.3 AWS Implementation Mapping

  • Ingress
    • SFTP endpoints → AWS Transfer Family → S3 (ACH in/out folders).
    • Direct partner integrations via VPC, DX, or VPN.
  • Orchestration
    • EventBridge rule on S3 put → Step Functions workflow:
      • Validate format (Lambda/Glue).
      • Enrich and fan-out to individual transactions.
      • Submit to ACH operators or downstream core via ECS/Lambda.
    • For heavy compute or multi-hour jobs, use AWS Batch or EMR instead of Lambda, which is capped at 15 minutes per invocation.[13][9][14]
  • Data
    • S3 for raw Nacha files; Glue Data Catalog for schema; Athena for queries.
    • Aurora/DynamoDB for transaction states & audit.
  • Notifications
    • SNS/SES/Slack integration for cut-off status, failures, reconciliations.

2.4 Service Choices

  • Lambda vs Batch vs EMR
    • Lambda: validation + simple transformations < 15 min, event triggered.
    • Batch: long-running, containerized transformations (e.g., 10M records), controlled concurrency.[9]
    • EMR: Spark-based transformations, joins with other datasets, advanced analytics.
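The "validate format" step above can be sketched at the structural level. This is a minimal illustration only: it checks the 94-character fixed-width record length and record-type ordering that Nacha files use; a real validator enforces dozens of field-level rules from the Nacha operating rules.

```typescript
// Minimal structural Nacha check: record length and record-type ordering.
// Record type codes: 1 = File Header, 5 = Batch Header, 6 = Entry Detail,
// 7 = Addenda, 8 = Batch Control, 9 = File Control.

const VALID_TYPES = new Set(["1", "5", "6", "7", "8", "9"]);

function validateNachaLines(lines: string[]): string[] {
  const errors: string[] = [];
  lines.forEach((line, i) => {
    if (line.length !== 94) {
      errors.push(`line ${i + 1}: expected 94 characters, got ${line.length}`);
    }
    if (!VALID_TYPES.has(line[0])) {
      errors.push(`line ${i + 1}: unknown record type '${line[0]}'`);
    }
  });
  if (lines.length > 0 && lines[0][0] !== "1") {
    errors.push("file must start with a File Header record (type 1)");
  }
  if (lines.length > 0 && lines[lines.length - 1][0] !== "9") {
    errors.push("file must end with a File Control record (type 9)");
  }
  return errors;
}
```

Run as a Lambda on the S3 put event, emitting errors to the Step Functions state so failures route to a notification branch.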

3. KYC (Know Your Customer)

3.1 Requirements Breakdown

  • Business need
    • Fast, compliant onboarding with low friction, automated risk classification, and ongoing monitoring.[15][16]
  • Functional
    • Customer data & document capture.
    • Identity verification (ID, biometrics), PEP/sanctions screens.
    • Risk scoring, approvals, manual review workflows.
  • Non-functional
    • High throughput onboarding (10k/hour+), secure document storage, strict PII protection.[17]

3.2 Challenges

  • Document handling at scale (images, PDFs).
  • Third-party KYC provider integrations.
  • Continuous KYC (periodic refresh, event-based triggers).
  • Data residency and privacy regulations.

3.3 AWS Mapping

  • API layer
    • API Gateway / ALB → ECS/EKS/Fargate microservice (KYC API) or Lambda for simpler workloads.[17]
  • Storage
    • PII & KYC profile: RDS/Aurora (structured), DynamoDB (if very high scale).
    • Documents: S3 with bucket policies, encryption, Object Lock (immutability, if required).
  • Background checks
    • EventBridge events on KYC_SUBMITTED → Step Functions →
      Lambda tasks:
      • Call external providers (document verification, sanctions lists).
      • Run internal risk rules.
      • Persist results, raise tasks for manual review.
  • GenAI-based KYC
    • Amazon Bedrock + knowledge bases for policy document parsing, narrative generation, missing data prompts.[15]
  • Secrets & config
    • Secrets Manager for provider keys; SSM Parameter Store for configs.
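The "run internal risk rules" task above reduces to a pure classification function. A sketch under illustrative assumptions: the signal names, the 0.8 document-score threshold, and the decision labels are placeholders, not policy.

```typescript
// Sketch of a KYC risk-classification step (thresholds are illustrative).

interface KycSignals {
  sanctionsHit: boolean;
  documentScore: number;   // 0..1 confidence from the document-verification provider
  pepMatch: boolean;       // politically exposed person screen
}

type Decision = "AUTO_APPROVE" | "MANUAL_REVIEW" | "REJECT";

function classifyApplicant(s: KycSignals): Decision {
  if (s.sanctionsHit) return "REJECT";                       // hard stop
  if (s.pepMatch || s.documentScore < 0.8) return "MANUAL_REVIEW";
  return "AUTO_APPROVE";
}
```

Keeping this as a pure function makes the rules unit-testable and auditable independently of the Step Functions wiring around them.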

4. Account Opening Systems

4.1 Requirements Breakdown

  • Business
    • Multi-channel (web, mobile, assisted) account opening with KYC, funding, product selection, and regulatory disclosures.[16]
  • Functional
    • Product catalogue, eligibility, KYC integration, funding (card/ACH), consent management, document e-sign, welcome journeys.
  • Non-functional
    • Low-latency UX, strong consistency for account numbers, idempotent operations.

4.2 AWS Mapping

  • Architecture
    • Microservices: ApplicationService, EligibilityService, KYCService, FundingService, CoreIntegrationService.
    • ECS/EKS (or Lambda for smaller shops), API Gateway for north-south traffic.
  • Data
    • Aurora PostgreSQL for application + account data; DynamoDB for sessions.
    • S3 for application documents; S3/Glue/Athena for analytics & funnel metrics.
  • Workflows
    • Step Functions for multi-step account opening workflows (including compensation for failed steps).
  • SaaS / multi-tenant patterns
    • Use SaaS Lens of Well-Architected for tenant onboarding and isolation.[18]
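The "compensation for failed steps" pattern above is a saga. Step Functions expresses it with Catch blocks per state; the same idea in plain code, as a sketch: run steps in order and, on failure, undo completed steps in reverse.

```typescript
// Saga sketch: on a failed step, compensate completed steps in reverse order.

interface SagaStep {
  name: string;
  run: () => void;
  compensate: () => void;
}

function runSaga(steps: SagaStep[]): { ok: boolean; compensated: string[] } {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.run();
      done.push(step);
    } catch {
      // Undo in reverse (e.g., release reserved account number, void funding hold).
      const compensated = done.reverse().map((s) => {
        s.compensate();
        return s.name;
      });
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}
```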

5. Batch Jobs & Scheduling

5.1 Requirements Breakdown

  • Business
    • End-of-day postings, reports, settlements, ETL, daily risk reports.
  • Functional
    • Scheduled, dependency-aware batch workflows.
  • Non-functional
    • Predictable SLAs, robust retries, cost-efficient execution.

5.2 AWS Mapping & Service Choices

| Requirement | Best-Fit Service |
| --- | --- |
| Simple cron, short task | EventBridge schedule + Lambda |
| Long-running, containerized jobs | AWS Batch[9] |
| Complex workflows with branches/retries | Step Functions |
| Data-centric batch ETL | AWS Glue Jobs |

Pattern: EventBridge (scheduler) → Step Functions → (Lambda + Batch / Glue) pipeline.[14][13]
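The "robust retries" NFR above usually means exponential backoff with a cap and jitter. A sketch of the standard "full jitter" policy (base delay and cap values are illustrative defaults):

```typescript
// Exponential backoff with cap and full jitter for batch steps that
// call flaky downstreams. attempt is 1-based; the random source is
// injectable so the policy is testable.

function backoffDelayMs(
  attempt: number,
  baseMs = 200,
  capMs = 30_000,
  random: () => number = Math.random
): number {
  const exp = Math.min(capMs, baseMs * 2 ** (attempt - 1));
  return Math.floor(random() * exp);   // "full jitter": uniform in [0, exp)
}
```

Step Functions can express the uncapped exponential part natively (Retry with `BackoffRate`); jitter is worth adding in the task itself when many failed workers would otherwise retry in lockstep.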


6. Service Mesh & Microservices

6.1 Requirements Breakdown

  • Business
    • Faster delivery, independent scaling, resilience, regional deployments.
  • Functional
    • Service discovery, routing, retries, canary deployments, mTLS.
  • Non-functional
    • Observability, zero-trust internal comms, minimal app code changes.

6.2 AWS Mapping

  • Foundational
    • ECS or EKS for workloads.
    • Service discovery via Cloud Map or K8s DNS.
  • Service mesh
    • AWS App Mesh for sidecar-based mesh (Envoy).[19][20][21]
    • Features: traffic splitting, retries, timeouts, circuit breaking, metrics & tracing.
  • Observability
    • Telemetry from sidecars to CloudWatch, X-Ray, or OpenTelemetry collector.[20][22]
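The traffic-splitting feature above (App Mesh route weights) comes down to weighted selection among backend versions. A sketch of the selection logic, e.g. a 90/10 canary split; the mesh does this in the Envoy sidecar, this is only an illustration of the semantics.

```typescript
// Weighted backend selection, as used for canary rollouts.

interface WeightedTarget { version: string; weight: number }

function pickTarget(targets: WeightedTarget[], roll: number): string {
  // roll is in [0, 1); weights need not sum to any fixed number.
  const total = targets.reduce((sum, t) => sum + t.weight, 0);
  let threshold = roll * total;
  for (const t of targets) {
    threshold -= t.weight;
    if (threshold < 0) return t.version;
  }
  return targets[targets.length - 1].version;  // guard against rounding
}
```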

7. Data Mesh & Data Platforms

7.1 Requirements Breakdown

  • Business
    • Data as a product, decentralized domain ownership, cross-domain analytics.[6][7]
  • Functional
    • Multiple domain data products, data cataloging, data access governance.
  • Non-functional
    • Inter-domain SLAs, lineage, governance, cost transparency.

7.2 AWS Mapping

  • Lake House foundation
    • S3-based data lake; Lake Formation for governance & access control.[6]
    • Glue Data Catalog for metadata; Athena, Redshift, EMR/Spark for processing.
  • Data mesh pattern
    • Each domain:
      • Owns an S3 bucket(s) & Glue catalog DB.
      • Publishes datasets with schemas and contracts.
    • Central governance account with Lake Formation & catalog for cross-domain discovery.[6]
  • Event-driven data mesh
    • MSK/Kinesis for domain events; consumer domains build derived data products.[7]
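"Datasets with schemas and contracts" implies consumers can verify a published data product before accepting it. A sketch of such a contract check; the column names and the type vocabulary are assumptions for illustration.

```typescript
// Sketch of a data-product contract check a consuming domain might run
// on sampled rows before wiring a published dataset into its pipeline.

interface ColumnSpec { name: string; type: "string" | "number" | "boolean" }

function violatesContract(
  contract: ColumnSpec[],
  row: Record<string, unknown>
): string[] {
  const problems: string[] = [];
  for (const col of contract) {
    if (!(col.name in row)) {
      problems.push(`missing column: ${col.name}`);
    } else if (typeof row[col.name] !== col.type) {
      problems.push(`wrong type for ${col.name}: expected ${col.type}`);
    }
  }
  return problems;
}
```

In practice the contract itself would live in the Glue Data Catalog (or a schema registry for streaming), and the check runs as part of the producing domain's publish pipeline.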

8. ETL / Data Ingestion / Data Pipelines

8.1 Requirements Breakdown

  • Ingestion patterns
    • Batch (files, DB dumps), CDC, real-time streams, API-based ingestion.
  • Transformations
    • Cleaning, normalization, enrichment, feature computation.

8.2 AWS Mapping & Comparisons

Use CaseService(s)Notes
Managed ETL with SparkAWS GlueLarge-scale batch ETL, data catalog, job bookmarks.[23][10]
Light event processingLambdaSmall, real-time, event-triggered transforms.[23]
Stream ingestionKinesis Data Streams/Firehose, MSKFirehose → S3/Redshift; KDS/MSK for custom consumers.[10]
CDC from DBsDMSMigration or ongoing CDC into S3/Redshift/RDS.

Typical pattern:
Source → (Kinesis/MSK/DMS/Transfer) → S3 Data Lake → Glue/EMR → Athena/Redshift
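The "cleaning, normalization" transformations listed above can be sketched as a pure function. The record shape and rules here are illustrative assumptions (trim fields, uppercase currency codes, drop rows that fail basic checks); the same logic would run inside a Glue job or Lambda transform.

```typescript
// Sketch of a cleaning/normalization stage in an ETL pipeline.

interface RawTxn { id: string; amount: string; currency: string }
interface CleanTxn { id: string; amountMinor: number; currency: string }

function cleanTransactions(rows: RawTxn[]): CleanTxn[] {
  const out: CleanTxn[] = [];
  for (const r of rows) {
    const amount = Number(r.amount.trim());
    const currency = r.currency.trim().toUpperCase();
    // Drop records that fail basic validation (real pipelines would
    // route these to a quarantine/DLQ location instead of silently dropping).
    if (!r.id || Number.isNaN(amount) || currency.length !== 3) continue;
    out.push({ id: r.id, amountMinor: Math.round(amount * 100), currency });
  }
  return out;
}
```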


9. Database Design & Technology Selection

9.1 Requirements → DB Engine

Key questions:

  1. Access pattern known and stable?
  2. Need strong relational modeling & joins?
  3. Global scale / multi-region writes?
  4. Predictable vs spiky workloads?

9.2 Quick Decision Table (2025)

| Scenario | Recommended DB |
| --- | --- |
| Traditional OLTP, known schema | RDS (Postgres/MySQL) |
| High-scale OLTP with read replicas, HA | Aurora |
| Global, multi-region OLTP with strong consistency | Aurora DSQL[11] |
| High-scale key/value with flexible schema | DynamoDB |
| Analytics, columnar, warehouse | Redshift |
| Ad-hoc queries over S3 | Athena |
| Time series, IoT | Timestream |

Guides: AWS Database Decision Guides.[24][25][26]
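The key questions and the decision table above can be folded into a first-pass heuristic. A sketch only: the predicate names and the precedence order are simplifications, not an exhaustive rule set.

```typescript
// First-pass DB engine recommendation from workload traits (heuristic sketch).

interface DbNeeds {
  multiRegionWrites: boolean;   // global, multi-region OLTP
  relational: boolean;          // strong relational modeling & joins
  highScaleKeyValue: boolean;   // known access patterns, flexible schema
  analytics: boolean;           // columnar / warehouse queries
}

function recommendDb(n: DbNeeds): string {
  if (n.analytics) return "Redshift";
  if (n.multiRegionWrites && n.relational) return "Aurora DSQL";
  if (n.highScaleKeyValue) return "DynamoDB";
  if (n.relational) return "Aurora / RDS (Postgres/MySQL)";
  return "DynamoDB";   // default for key/value-style operational stores
}
```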


10. AI Agents & Agentic AI Services

10.1 Requirements Breakdown

  • Business
    • Automate complex workflows (onboarding, support, ops runbooks) via agents.
  • Functional
    • Tools/API calling, RAG, multi-agent collaboration, governance.
  • Non-functional
    • Security & PII handling, latency, auditability, guardrails.

10.2 AWS Mapping (2025)

  • Core platform: Amazon Bedrock Agents & AgentCore, Bedrock Flows, Knowledge Bases, Guardrails.[27][28][29]
  • Architecture
    • Agent orchestrator (Bedrock AgentCore).
    • Tools implemented as HTTP APIs or Lambda/ECS services.
    • Vector store: managed knowledge bases (e.g., built on OpenSearch Serverless, Aurora, etc.).
  • Patterns
    • Supervisor agent → domain-specific agents (KYC, Payments, Support).
    • Integration into existing backend via REST/gRPC APIs, EventBridge events.
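The supervisor → domain-agent pattern above can be illustrated at its simplest: the supervisor routes a request to a domain agent. The sketch below uses keyword matching purely for illustration; agent names and keywords are assumptions, and Bedrock AgentCore does this with model-driven routing rather than keyword rules.

```typescript
// Sketch of supervisor routing to domain agents (illustrative only).

const AGENT_KEYWORDS: Record<string, string[]> = {
  KycAgent: ["kyc", "identity", "document"],
  PaymentsAgent: ["payment", "refund", "chargeback"],
  SupportAgent: [],   // fallback handler, matched last
};

function routeToAgent(utterance: string): string {
  const text = utterance.toLowerCase();
  for (const [agent, keywords] of Object.entries(AGENT_KEYWORDS)) {
    if (keywords.some((k) => text.includes(k))) return agent;
  }
  return "SupportAgent";
}
```

The architectural point survives the simplification: routing decisions live in one supervisor component, and each domain agent exposes a narrow, auditable tool surface.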

11. Real-Time Communication (RTC) Systems

11.1 Requirements Breakdown

  • Business
    • Chat, notifications, live dashboards, trading UIs, collaborative tools.
  • Functional
    • Low-latency pub/sub, presence, message ordering (where needed).
  • Non-functional
    • Handle thousands to millions of concurrent connections; secure multi-tenant isolation.

11.2 AWS Patterns

  • For GraphQL/APIs
    • AWS AppSync with GraphQL subscriptions over WebSockets.[30][31]
    • Backed by DynamoDB/Lambda/S3 as data sources.
  • For generic WebSockets
    • API Gateway WebSocket APIs + Lambda for handlers.
  • Notifications
    • SNS for fan-out; EventBridge for routing events; push via AppSync or mobile push.
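The fan-out pattern above (SNS topics, AppSync subscriptions) is one published message delivered to every subscriber of a topic. A minimal in-memory sketch of those semantics, standing in for the managed service:

```typescript
// In-memory sketch of topic-based pub/sub fan-out.

type Handler = (message: string) => void;

class TopicBus {
  private subs = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.subs.get(topic) ?? [];
    list.push(handler);
    this.subs.set(topic, list);
  }

  publish(topic: string, message: string): number {
    const list = this.subs.get(topic) ?? [];
    list.forEach((h) => h(message));
    return list.length;   // number of subscribers that received the message
  }
}
```

The managed equivalents add what this sketch omits: durable delivery, retries with DLQs, per-subscriber filtering, and connection management for WebSocket clients.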

12. Cross-Cutting Modern Requirements

12.1 Well-Architected Integration

Use the 6 pillars as a checklist for each design.[2][32]

Example Architectural Decisions per Pillar:

  • Operational Excellence
    • IaC (Terraform/CloudFormation/CDK), CI/CD, runbooks, game days.
  • Security
    • Multi-account isolation (AWS Organizations), IAM least privilege, KMS everywhere, centralized logging, Security Hub.
  • Reliability
    • Multi-AZ default; multi-Region for critical workloads; DR runbooks; SLOs/SLIs.
  • Performance Efficiency
    • Load/perf tests, autoscaling, caching (CloudFront, ElastiCache), right-sizing.
  • Cost Optimization
    • Cost allocation tags, Cost Explorer, savings plans, control planes around scale-down.
  • Sustainability
    • Prefer managed/serverless; optimize idle resources.

12.2 Security Best Practices (Quick List)

  • IAM roles for workloads; no long-lived user keys.
  • KMS CMKs for all sensitive data; envelope encryption.
  • VPC-private access to data services; use VPC endpoints.
  • WAF + Shield for externally exposed APIs.
  • Secrets Manager & Parameter Store for secrets/config.

12.3 Observability

  • CloudWatch metrics & logs everywhere; X-Ray traces.
  • Centralized log account with S3 + Athena + OpenSearch for search.
  • Alerts on SLOs (latency, error rates, saturation), not just CPU.

12.4 CI/CD & IaC

  • CI: GitHub Actions / CodeBuild, unit & integration tests.
  • CD: CodePipeline/ArgoCD/Spinnaker (depending on stack).
  • IaC: CDK, CloudFormation, or Terraform; module library per domain (payments, KYC, etc.).

13. Reference IaC Snippets (Illustrative)

These are intentionally minimal; in a real playbook you’d template these per domain.

13.1 Terraform – Example VPC + ECS Service Skeleton

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

# Security group referenced by the service below (egress-only here;
# add least-privilege ingress rules to match your ALB/NLB design).
resource "aws_security_group" "payments_sg" {
  name   = "payments-sg"
  vpc_id = aws_vpc.main.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_ecs_cluster" "payments" {
  name = "payments-cluster"
}

resource "aws_ecs_task_definition" "auth_service" {
  family                   = "auth-service"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "1024"
  memory                   = "2048"

  container_definitions = jsonencode([
    {
      name      = "auth-service"
      image     = "123456789012.dkr.ecr.us-east-1.amazonaws.com/auth:latest"
      essential = true
      portMappings = [
        {
          containerPort = 8080
        }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = "/ecs/auth-service"
          "awslogs-region"        = "us-east-1"
          "awslogs-stream-prefix" = "ecs"
        }
      }
    }
  ])
}

resource "aws_ecs_service" "auth_service" {
  name            = "auth-service"
  cluster         = aws_ecs_cluster.payments.id
  task_definition = aws_ecs_task_definition.auth_service.arn
  desired_count   = 3
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = [aws_subnet.private_a.id]
    assign_public_ip = false
    security_groups  = [aws_security_group.payments_sg.id]
  }
}

13.2 CDK (TypeScript) – Simple API Gateway + Lambda

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigw from 'aws-cdk-lib/aws-apigateway';

export class PaymentsApiStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const authFn = new lambda.Function(this, 'AuthFn', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/auth'),
      environment: {
        TABLE_NAME: 'payments-table',
      },
    });

    const api = new apigw.RestApi(this, 'PaymentsApi', {
      restApiName: 'Payments Service',
    });

    const payments = api.root.addResource('payments');
    payments.addMethod('POST', new apigw.LambdaIntegration(authFn));
  }
}

14. Checklists & Decision Matrix

14.1 Architecture Decision Checklist (Per Workload)

  1. Requirements

    • Business KPIs and regulatory context captured.
    • Functional flows documented (happy + unhappy paths).
    • NFRs quantified (latency, TPS, RPO/RTO).
  2. Security & Compliance

    • Data classified; encryption requirements defined.
    • IAM design reviewed; least privilege enforced.
    • Compliance controls mapped (PCI, KYC/AML, Nacha, etc.).
  3. Compute

    • Execution profile (event-driven vs steady) analyzed.
    • Lambda vs ECS vs EKS vs Batch decision documented.
    • Autoscaling policies defined.
  4. Data

    • Access patterns known and mapped to DB choice (RDS/Aurora/DynamoDB/Redshift).
    • Backup & DR plan defined.
    • Data lifecycle & retention in place.
  5. Integration & Events

    • Pub/sub vs point-to-point decision made.
    • Event schema versioning approach defined.
  6. Observability

    • Logging, metrics, tracing design.
    • SLOs & alert thresholds defined.
  7. Cost

    • Capacity estimates & unit cost modeled.
    • Savings Plans/RIs/spot opportunities identified.
  8. Well-Architected

    • Workload reviewed against 6 pillars.
    • High-risk issues captured with remediation plan.

14.2 Service Selection Decision Matrix (Lambda / ECS / EKS / Batch)

| Criterion | Lambda | ECS Fargate | EKS | Batch |
| --- | --- | --- | --- | --- |
| Workload duration | ms–15 min | Seconds–days | Seconds–days | Minutes–days |
| Traffic pattern | Spiky, event-driven | Steady or spiky | Steady, complex | Scheduled / on-demand batch |
| Ops overhead | Low | Medium | High | Medium |
| Ecosystem need (K8s) | No | No | Yes | No (containerized) |
| Pricing model | Per-request, duration | vCPU/GB-hour | Node hours | vCPU/GB-hour |
| Use for payments | Light APIs, async tasks | Core services, real-time flows | Mesh + complex multi-tenant | Heavy offline jobs, reports |

15. References & Further Reading

  • Well-Architected Framework:
    • AWS Well-Architected home[2]
    • Overviews of 6 pillars[3][32][1]
  • Payments & ACH:
    • Real-time payment orchestration on AWS[4]
    • Event-driven payment systems guidance[5]
    • ACH processing basics[12]
  • KYC & Onboarding:
    • Digital KYC with GenAI on AWS[15]
    • Digital onboarding architecture[16]
  • Service Mesh:
    • AWS service mesh overview[21]
    • App Mesh with ECS/EKS[19][20]
  • Data Mesh / Platforms:
    • Data mesh with Lake Formation & Glue[6]
    • Event-driven data mesh[7]
  • Batch & ETL:
    • AWS Batch patterns & Lambda comparison[23][10][13][9]
  • Databases:
    • Aurora vs RDS vs DynamoDB guides[25][26][24][11]
  • AI Agents:
    • Amazon Bedrock Agents & AgentCore[28][29][27]
  • Real-Time Communication:
    • AppSync WebSocket subscriptions[31][30]


