
Technical Implementation Patterns: Matching Solutions to Requirements

A framework for selecting the right technical approach—low-code, custom development, integrations, or AI—based on your documented requirements.

PickleLlama Team
November 26, 2024
19 min read
Implementation · Architecture · Strategy

The Bottom Line

This report assumes you've already done the hard work. You've selected a problem worth solving using the FIVES framework. You've documented requirements through Jobs-to-be-Done and mapped the current process with Value Stream analysis. You've made deliberate build-versus-buy decisions using the 7 C's. Now comes the question that determines whether all that preparation pays off: which technical approach actually fits what you've learned?

The answer is rarely a single technology. Most mid-market automation initiatives combine multiple approaches—low-code platforms handling standard workflows, custom code for differentiating logic, purpose-built integrations connecting systems, and AI applications for tasks requiring judgment within defined boundaries. The skill lies in knowing which pattern applies where.

This report examines four implementation categories through the lens of use cases and architecture patterns, then provides a decision framework for matching your documented requirements to the right technical approach.


Category 1: Low-code workflow platforms

What this means: Platforms like n8n, Make, Zapier, and Power Automate that connect applications through pre-built integrations, enabling automation without traditional software development. You configure rather than code, assembling workflows from existing components.

The core value proposition: Speed to deployment. A workflow that would require weeks of custom development—API authentication, error handling, retry logic, monitoring—deploys in hours or days because the platform has already solved those problems for common integrations.

Use cases where low-code excels

Cross-application data synchronization represents the sweet spot. When a new customer is added in your CRM, create corresponding records in your accounting system, add them to your email platform, notify the sales team in Slack, and log the activity. Each individual integration is straightforward; the value comes from orchestrating them reliably without maintaining custom code for each connection.

Document and notification workflows leverage low-code strengths effectively. Route incoming emails to appropriate queues based on content analysis. Generate documents from templates when deals close. Send reminders when tasks age beyond thresholds. Alert managers when metrics cross boundaries. These workflows follow predictable patterns with well-defined triggers and actions.

Scheduled reporting and data preparation fits naturally. Pull data from multiple sources nightly, transform and aggregate it, deposit results in a data warehouse or reporting tool. The logic is straightforward; the value is reliable execution without babysitting scripts.

Lead routing and qualification demonstrates low-code handling moderate complexity. Score incoming leads based on firmographic data, route to appropriate sales representatives based on territory and capacity, create follow-up tasks, update CRM stages. The business rules are clear even if the branching logic gets elaborate.

Architecture patterns for low-code

Hub-and-spoke integration places the low-code platform at the center, connecting to surrounding applications through native connectors. Works well when most integrations involve the same core systems (CRM, email, accounting) and data flows are relatively simple. The platform becomes your integration layer.

Event-driven triggers respond to changes in source systems. A new record, an updated field, a webhook from an external service—each triggers a workflow that propagates changes or initiates downstream processes. This pattern handles most real-time synchronization needs without polling overhead.

Scheduled batch processing runs workflows on time-based triggers for operations where real-time isn't necessary. Nightly data consolidation, weekly report generation, monthly cleanup tasks. Simpler to debug than event-driven flows and appropriate when latency tolerance exists.

Human-in-the-loop workflows pause for approval or input before continuing. Route a purchase request for manager approval, wait for sign-off, then proceed with procurement steps. Low-code platforms handle the state management and notification logic that makes these workflows reliable.

Where low-code breaks down

Complex conditional logic overwhelms visual workflow builders. When decision trees exceed 5-7 branches, when conditions depend on data from multiple sources evaluated together, when business rules require calculations beyond basic arithmetic—the visual interface becomes harder to maintain than code would be.

High-volume, low-latency requirements hit platform limits. Zapier processes records sequentially with minimum 1-minute polling intervals. Make and n8n handle higher throughput but still impose execution time limits. If you need to process thousands of events per minute with sub-second response times, you've outgrown low-code for that specific workflow.

Sophisticated error recovery requiring business context exceeds platform capabilities. Low-code platforms retry failed steps and can branch on errors, but they can't make nuanced decisions about whether a failure in step 7 means rolling back steps 3-6, notifying a specific person based on the data involved, and attempting an alternative approach. That requires custom logic.

Version control and deployment management remains primitive. Testing changes against production data, maintaining separate development and staging environments, rolling back deployments—these software engineering fundamentals barely exist in most low-code platforms. n8n Enterprise offers Git integration, but it's the exception.


Category 2: Purpose-built traditional software

What this means: Custom applications built with conventional programming languages and frameworks—React frontends, Python or Node.js backends, PostgreSQL databases—deployed on cloud infrastructure. You write code to implement business logic precisely as your requirements specify.

The core value proposition: Precision and control. When your competitive advantage depends on exactly how a process works, custom software lets you implement that logic without compromise. You're not constrained by what a platform vendor decided to support.

Use cases where custom development excels

Proprietary business logic that differentiates you demands custom development. The pricing algorithm that accounts for factors competitors don't consider. The scheduling optimization that makes your service more efficient than alternatives. The workflow that embodies hard-won operational knowledge. These are the "build" decisions from your 7 C's analysis—capabilities central to your value proposition.

High-throughput transaction processing requires the performance characteristics custom code provides. Processing thousands of orders per minute, handling real-time inventory updates across locations, managing financial transactions with strict consistency requirements. The control over database design, caching strategy, and processing architecture determines whether the system performs.

Complex multi-step processes with interdependencies fit custom development when the logic exceeds what low-code can express clearly. Manufacturing execution systems tracking work-in-progress through dozens of stations. Loan origination workflows with conditional document requirements based on applicant profiles. Supply chain orchestration coordinating multiple vendors with different capabilities and constraints.

User-facing applications with specific UX requirements benefit from custom frontend development. When the interface itself is part of your product—a customer portal, a specialized tool for your operations team, a client-facing dashboard—custom development delivers experiences that template-based tools cannot match.

Architecture patterns for custom software

The modular monolith represents the recommended starting point for most mid-market applications. A single deployable unit with well-defined internal module boundaries. Simpler to develop, deploy, and debug than distributed systems, while maintaining the option to extract services later if specific modules need independent scaling.

API-first design structures applications around well-defined interfaces from the start. Even if you're building a monolith, designing clear API boundaries between modules enables future flexibility—swapping implementations, exposing functionality to partners, or integrating with other systems. Document APIs with OpenAPI specifications; this documentation becomes an asset.
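As a sketch of what API-first looks like in practice, the fragment below assumes FastAPI and an illustrative orders endpoint (the request/response models are assumptions, not a prescribed schema). FastAPI generates the OpenAPI document automatically, which becomes the contract other modules, partners, and systems consume.

```python
# A minimal API-first sketch, assuming FastAPI; models and routes are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Orders Module", version="1.0.0")

class OrderRequest(BaseModel):
    customer_id: str
    sku: str
    quantity: int

class OrderResponse(BaseModel):
    order_id: str
    status: str

@app.post("/orders", response_model=OrderResponse, summary="Create an order")
def create_order(req: OrderRequest) -> OrderResponse:
    # Internally this could call another module of the monolith;
    # the contract above is what the generated OpenAPI spec documents.
    return OrderResponse(order_id="ord_123", status="accepted")

# FastAPI serves the generated spec at /openapi.json, which is the artifact
# you share with partners or other teams.
```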

Event-driven architecture decouples components through asynchronous message passing. When an order is placed, publish an event; interested services (inventory, fulfillment, notifications) subscribe and react independently. This pattern handles complexity well but requires investment in message infrastructure (Kafka, SQS, RabbitMQ) and adds operational overhead appropriate only when the decoupling benefits justify it.
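A minimal sketch of the publish side, assuming an SNS topic fanning out to per-service SQS queues via boto3; the topic ARN and event shape are illustrative assumptions rather than a prescribed schema.

```python
# Publishing an "order placed" domain event; consumers subscribe independently.
import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"  # hypothetical topic

def publish_order_placed(order_id: str, customer_id: str, total: float) -> None:
    event = {
        "type": "order.placed",
        "order_id": order_id,
        "customer_id": customer_id,
        "total": total,
    }
    # Inventory, fulfillment, and notification services each subscribe their own
    # queue to this topic and react without the publisher knowing about them.
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(event))

publish_order_placed("ord_123", "cust_42", 199.00)
```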

Container-based deployment on managed services (AWS ECS/Fargate, Google Cloud Run, Azure Container Apps) provides production-grade infrastructure without Kubernetes complexity. Containers offer consistency between development and production, straightforward horizontal scaling, and clean deployment semantics. For most mid-market applications, managed container services provide sufficient capability without requiring dedicated platform engineering.

Where custom development is overkill

Standard integrations between common systems waste development resources. Building a custom Salesforce-to-QuickBooks sync when Zapier or a native integration handles it reliably means you're maintaining code for a solved problem. Reserve custom development for logic that doesn't exist elsewhere.

Workflows that change frequently suffer under custom development's longer change cycles. If business users need to modify routing rules weekly, building a custom rules engine is more expensive than using a low-code platform they can adjust themselves. Custom development suits stable logic; volatile rules need platforms designed for non-technical modification.

Prototypes and experiments shouldn't start with production-grade architecture. The 50-75% cost reduction from AI-assisted development has made prototyping cheaper, but low-code platforms remain faster for validating whether an automation delivers value before investing in custom implementation.


Category 3: Purpose-built integrations

What this means: Custom code specifically designed to connect systems—moving data between applications, synchronizing state, transforming formats, and bridging capabilities that neither low-code platforms nor source systems provide natively. The integration itself is the product.

The core value proposition: Control over the connection logic. When the integration requires transformation, validation, or orchestration beyond what pre-built connectors offer—but the connected systems themselves don't need custom development—purpose-built integration provides surgical precision.

Use cases where purpose-built integration excels

Legacy system connectivity often requires custom integration. That AS/400 running your inventory system doesn't have a REST API or Zapier connector. A 15-year-old ERP exposes data through ODBC but requires specific query patterns to avoid performance issues. Custom integration code adapts to these constraints while presenting clean interfaces to modern systems.

Complex data transformation pipelines exceed what low-code transformation steps handle elegantly. Normalizing addresses across different country formats. Matching customer records across systems with inconsistent identifiers. Converting between industry-specific data standards. When transformation logic is substantial, dedicated integration code is more maintainable than embedded low-code expressions.

High-reliability data synchronization with strict consistency requirements benefits from purpose-built integration. Financial reconciliation that must handle edge cases correctly. Inventory synchronization where discrepancies have real cost. Order management where dropped or duplicated messages create customer problems. Custom integration code allows implementation of exactly the reliability guarantees your requirements specify.

Bridging low-code platforms to specialized systems extends platform reach. Your low-code workflow handles 80% of a process, but one step requires calling a proprietary API with complex authentication, or processing data through a library only available in Python. A small custom integration service handles that step, called by the workflow via webhook.
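A minimal sketch of such a bridge service, assuming Flask; the endpoint path and scoring function are placeholders for whatever proprietary API call or Python-only library the platform can't reach natively. An n8n or Make workflow would call it with an HTTP Request step.

```python
# A small custom step exposed as a webhook for a low-code workflow to call.
from flask import Flask, request, jsonify

app = Flask(__name__)

def proprietary_score(payload: dict) -> float:
    # Placeholder for the specialized library or authenticated API call
    # that the low-code platform cannot perform itself.
    return round(len(payload.get("description", "")) * 0.1, 2)

@app.post("/steps/score")
def score_step():
    payload = request.get_json(force=True)
    return jsonify({"score": proprietary_score(payload)})

if __name__ == "__main__":
    app.run(port=8080)
```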

Architecture patterns for integration

API gateway pattern centralizes integration logic behind a unified interface. External systems call your gateway; it routes requests to appropriate backends, handles authentication, transforms payloads, and manages rate limiting. This pattern is particularly valuable when multiple consumers need access to the same integrations with different requirements.
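A minimal gateway sketch, assuming FastAPI and httpx; the backend URLs, API-key check, and route layout are illustrative assumptions. Production gateways add payload transformation and rate limiting at the marked point.

```python
# Route external requests to internal backends behind one authenticated interface.
import httpx
from fastapi import FastAPI, Header, HTTPException

app = FastAPI(title="Integration Gateway")

BACKENDS = {
    "crm": "https://crm.internal.example.com/api",          # hypothetical backend
    "billing": "https://billing.internal.example.com/api",  # hypothetical backend
}
API_KEYS = {"partner-key-123"}  # in practice, looked up from a secrets store

@app.get("/{system}/{path:path}")
async def proxy(system: str, path: str, x_api_key: str = Header(default="")):
    if x_api_key not in API_KEYS:
        raise HTTPException(status_code=401, detail="invalid API key")
    base = BACKENDS.get(system)
    if base is None:
        raise HTTPException(status_code=404, detail="unknown system")
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{base}/{path}")
    # Payload transformation and rate limiting would slot in here.
    return resp.json()
```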

Change Data Capture (CDC) streams database changes to downstream systems without modifying source applications. Tools like Debezium read database transaction logs, publishing row-level changes to Kafka or similar message systems. Consumers react to changes in near-real-time without polling overhead. Essential for keeping systems synchronized when you can't modify the source application to emit events.
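A sketch of the consumer side, assuming kafka-python and a Debezium topic named erp.public.customers; the topic name and the simplified change envelope are assumptions that depend on how the connector is configured.

```python
# Consume Debezium-style row changes and propagate them downstream.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "erp.public.customers",            # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    group_id="crm-sync",
)

for message in consumer:
    change = message.value             # simplified envelope; actual shape depends
    op = change.get("op")              # on converter settings ("c"/"u"/"d" ops)
    row = change.get("after") or change.get("before")
    # Upsert or delete the corresponding record in the downstream system here,
    # e.g. via the CRM's API.
    print(op, row)
```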

ELT pipeline architecture (Extract-Load-Transform) moves data to a central warehouse before transforming it. Modern data stacks use tools like Airbyte or Fivetran for extraction and loading, with dbt handling transformation in the warehouse using SQL. This pattern suits analytics and reporting use cases where you need to combine data from multiple sources.
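The toy sketch below illustrates the load-then-transform split, with sqlite standing in for the warehouse and a few lines of Python standing in for Airbyte or Fivetran; table names and data are illustrative.

```python
# Land raw payloads untransformed, then reshape them with SQL inside the warehouse.
import json
import sqlite3

raw_orders = [
    {"id": 1, "customer": {"name": "Acme"}, "total": "199.00"},
    {"id": 2, "customer": {"name": "Globex"}, "total": "54.50"},
]

db = sqlite3.connect(":memory:")  # stand-in for Snowflake/BigQuery/Redshift

# Load: store the raw records exactly as extracted.
db.execute("CREATE TABLE raw_orders (payload TEXT)")
db.executemany("INSERT INTO raw_orders VALUES (?)",
               [(json.dumps(o),) for o in raw_orders])

# Transform: SQL in the warehouse (dbt territory) builds the reporting table.
db.execute("""
    CREATE TABLE orders AS
    SELECT json_extract(payload, '$.id')                   AS order_id,
           json_extract(payload, '$.customer.name')        AS customer_name,
           CAST(json_extract(payload, '$.total') AS REAL)  AS total
    FROM raw_orders
""")
print(db.execute("SELECT * FROM orders").fetchall())
```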

Webhook relay services provide reliability for event-driven integrations. Rather than having source systems call destination webhooks directly (which fails silently if the destination is down), a relay service receives webhooks, persists them, and delivers with retry logic. This pattern is particularly valuable when connecting SaaS applications that don't provide delivery guarantees.
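A minimal relay sketch, assuming Flask for intake, sqlite for persistence, and a scheduled delivery pass for retries; the destination URL is a placeholder.

```python
# Receive webhooks, persist them, and deliver with retries on a schedule.
import sqlite3
import requests
from flask import Flask, request

app = Flask(__name__)
DESTINATION = "https://workflow.example.com/hooks/orders"  # hypothetical destination

db = sqlite3.connect("relay.db", check_same_thread=False)
db.execute(
    "CREATE TABLE IF NOT EXISTS events "
    "(id INTEGER PRIMARY KEY, body TEXT, delivered INTEGER DEFAULT 0)"
)
db.commit()

@app.post("/hooks/incoming")
def receive():
    # Persist first and acknowledge immediately; delivery happens separately.
    db.execute("INSERT INTO events (body) VALUES (?)",
               (request.get_data(as_text=True),))
    db.commit()
    return {"status": "accepted"}, 202

def deliver_pending():
    # Run from a scheduler (cron, APScheduler); anything undelivered is retried.
    rows = db.execute("SELECT id, body FROM events WHERE delivered = 0").fetchall()
    for event_id, body in rows:
        try:
            resp = requests.post(DESTINATION, data=body, timeout=10)
            if resp.ok:
                db.execute("UPDATE events SET delivered = 1 WHERE id = ?", (event_id,))
                db.commit()
        except requests.RequestException:
            pass  # leave undelivered; the next run retries
```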

Where purpose-built integration adds unnecessary complexity

Standard SaaS-to-SaaS connections don't need custom integration. If Zapier or Make has working connectors for both systems and your transformation requirements are simple, use them. The maintenance burden of custom integration code isn't justified for commodity connectivity.

Integration challenges that are actually data quality problems aren't solved by better integration code. If customer records don't match across systems because of inconsistent data entry, building more sophisticated matching logic treats symptoms rather than causes. Fix the data quality process before investing in integration complexity.

One-time data migrations rarely justify building reusable integration infrastructure. For a single migration, scripts with manual validation often work better than engineered pipelines you'll never use again. Reserve integration investment for ongoing operational needs.


Category 4: AI-driven applications

What this means: Applications that use large language models (LLMs) to perform tasks requiring interpretation, judgment, or generation—within defined boundaries. These aren't autonomous agents making independent decisions; they're tools that handle specific types of work where traditional rule-based systems would be too rigid but human processing would be too slow or expensive.

The core value proposition: Handling variability. Traditional automation requires explicitly programming every case. AI applications handle variations—different phrasings, unexpected formats, edge cases—that would require endless rules to address procedurally. The key constraint: the task must have clear documentation and evaluation criteria so you can verify the AI is performing correctly.

Use cases where AI-driven applications excel

Document processing and extraction demonstrates AI's strength with unstructured input. Invoices arriving in different formats from different vendors. Contracts with varying clause structures. Resumes with inconsistent organization. Traditional automation requires templates for each format; AI applications handle variation naturally, extracting structured data from unstructured documents with 90%+ accuracy for well-defined fields.

Customer communication assistance applies AI within clear boundaries. Draft responses to customer inquiries using knowledge base content. Classify incoming tickets by topic, urgency, and appropriate team. Summarize long email threads for quick review. The AI handles natural language variation while humans retain decision authority for anything consequential.

Knowledge base question answering connects users to existing documentation. Rather than searching through hundreds of articles, users ask questions in natural language; the system retrieves relevant content and synthesizes answers with citations. Works well for internal knowledge management, customer self-service, and technical documentation—anywhere a substantial body of documented knowledge exists.

Data enrichment and normalization uses AI for tasks requiring interpretation. Categorize products into a taxonomy based on descriptions. Standardize company names and addresses across inconsistent records. Extract structured data from semi-structured sources. These tasks involve judgment that traditional parsing can't capture, but the expected output format is well-defined.

Content generation within templates produces first drafts for human refinement. Product descriptions from specifications. Report sections from data summaries. Email drafts from bullet points. The AI handles the blank-page problem; humans edit and approve the output.

Architecture patterns for AI applications

RAG (Retrieval-Augmented Generation) grounds AI responses in your organization's documents. When a user asks a question, the system first retrieves relevant content from your knowledge base, then generates a response that synthesizes the retrieved information. This pattern dramatically reduces hallucination by keeping the AI focused on your actual documentation rather than its training data.
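A compact sketch of the pattern, with naive keyword matching standing in for embedding/vector search and the OpenAI Python client (an assumption; any chat-capable model works, and OPENAI_API_KEY must be set) generating the grounded answer from retrieved content.

```python
# Retrieve relevant documents first, then generate an answer constrained to them.
from openai import OpenAI

DOCS = [
    {"title": "Refund policy", "text": "Refunds are issued within 14 days of purchase."},
    {"title": "Shipping", "text": "Standard shipping takes 3-5 business days."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    # Stand-in for embedding search: rank by simple keyword overlap.
    words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(words & set(d["text"].lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n\n".join(f"[{d['title']}]\n{d['text']}" for d in retrieve(question))
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; substitute your chosen model
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context and cite source titles."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do refunds take?"))
```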

Structured output with validation ensures AI responses conform to expected formats. Modern LLMs can generate JSON matching specified schemas. Validate output against schemas before processing; reject and retry malformed responses. This pattern makes AI output reliable enough for automated downstream processing—essential for integration with other systems.
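A minimal sketch using pydantic for schema validation with a bounded retry; call_llm is a placeholder for your actual model call, and the invoice schema is illustrative.

```python
# Validate model output against a schema before anything downstream consumes it.
from pydantic import BaseModel, ValidationError

class InvoiceFields(BaseModel):
    vendor: str
    invoice_number: str
    total: float

def call_llm(prompt: str) -> str:
    # Placeholder: in practice, ask the model for JSON matching the schema.
    return '{"vendor": "Acme Corp", "invoice_number": "INV-1042", "total": 1280.50}'

def extract_invoice(document_text: str, max_attempts: int = 3) -> InvoiceFields:
    prompt = f"Extract vendor, invoice_number, and total as JSON:\n{document_text}"
    for _ in range(max_attempts):
        raw = call_llm(prompt)
        try:
            return InvoiceFields.model_validate_json(raw)  # reject malformed output
        except ValidationError:
            continue  # retry, optionally with the validation error fed back
    raise RuntimeError("model never produced valid structured output")

print(extract_invoice("Invoice INV-1042 from Acme Corp, total $1,280.50"))
```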

Human-in-the-loop workflows keep humans authoritative for consequential decisions. AI processes input and presents recommendations; humans review and approve before actions execute. This pattern captures AI's efficiency benefits while maintaining accountability. Essential for any application where errors have significant consequences.
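A bare-bones sketch of the control flow: the model only proposes, and nothing irreversible runs until a reviewer approves. The queue and refund example are illustrative assumptions.

```python
# AI drafts a recommendation; a human approval gates execution.
from dataclasses import dataclass

@dataclass
class PendingAction:
    description: str
    payload: dict
    approved: bool = False

REVIEW_QUEUE: list[PendingAction] = []

def propose_refund(ticket: dict) -> None:
    # In practice an LLM drafts this recommendation from the ticket content.
    REVIEW_QUEUE.append(PendingAction(
        description=f"Refund ${ticket['amount']} to {ticket['customer']}",
        payload={"customer": ticket["customer"], "amount": ticket["amount"]},
    ))

def approve_and_execute(action: PendingAction) -> None:
    action.approved = True        # set by a human through a review UI
    issue_refund(action.payload)  # only now does anything irreversible run

def issue_refund(payload: dict) -> None:
    print("refund issued:", payload)

propose_refund({"customer": "cust_42", "amount": 49.00})
approve_and_execute(REVIEW_QUEUE[0])
```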

Model routing for cost optimization uses inexpensive models for simple tasks, reserving expensive models for complex ones. A classifier determines query complexity; straightforward questions go to fast, cheap models while nuanced questions go to more capable (and expensive) ones. Organizations implementing routing report 70-90% cost reduction compared to using expensive models for everything.
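A minimal routing sketch, assuming the OpenAI client and two model tiers; the word-count heuristic is a crude stand-in for the classifier, which in practice is often itself a small model.

```python
# Send simple queries to a cheap model, complex ones to a capable model.
from openai import OpenAI

client = OpenAI()

CHEAP_MODEL = "gpt-4o-mini"  # assumption: fast, inexpensive tier
CAPABLE_MODEL = "gpt-4o"     # assumption: slower, more capable tier

def is_complex(query: str) -> bool:
    # Crude heuristic: long or multi-part questions go to the capable model.
    return len(query.split()) > 60 or query.count("?") > 1

def route(query: str) -> str:
    model = CAPABLE_MODEL if is_complex(query) else CHEAP_MODEL
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

print(route("What's our refund window?"))
```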

Where AI applications fail

Tasks requiring factual accuracy without verification invite hallucination. LLMs confidently generate plausible-sounding incorrect information. Any application where factual accuracy matters must either ground responses in retrieved documents (RAG) or include human verification before the output matters.

Fully autonomous decision-making without guardrails fails in production. The more independent authority you give an AI system, the more spectacular its failures become. Documented examples include AI agents deleting production databases despite explicit restrictions, making incorrect financial decisions, and generating harmful content when manipulated. Keep humans in the loop for anything consequential.

Tasks without clear evaluation criteria can't be deployed reliably. If you can't define what a correct response looks like, you can't validate the system is working. AI applications need measurable success criteria just like traditional software.

High-stakes decisions with legal or safety implications require human judgment. AI can assist—surfacing relevant information, drafting recommendations, flagging anomalies—but humans must make decisions affecting health, safety, legal compliance, or significant financial outcomes.


Decision framework: matching requirements to approaches

The frameworks from earlier reports—FIVES for problem selection, JTBD for requirements, Value Stream Mapping for process documentation—produce artifacts that directly inform technical approach selection. This framework maps those requirement characteristics to appropriate implementation patterns.

Primary decision factors

Process variability determines the boundary between traditional and AI-driven approaches.

| Variability Level | Characteristics | Recommended Approach |
|-------------------|-----------------|----------------------|
| Low | Same inputs, same steps, same outputs every time | Low-code workflows or traditional automation |
| Moderate | Defined variations with clear rules for handling each | Low-code with branching or custom code |
| High within bounds | Variable inputs but consistent expected output format | AI-driven with structured output and validation |
| Unbounded | No predictable pattern to inputs or expected outputs | Requires human processing; AI can assist but not replace |

Integration complexity determines whether low-code platforms suffice or purpose-built integration is necessary.

| Complexity Level | Characteristics | Recommended Approach |
|------------------|-----------------|----------------------|
| Standard | Common SaaS applications with existing connectors | Low-code platform native integrations |
| Custom authentication or protocols | Non-standard APIs, legacy systems, proprietary formats | Purpose-built integration service |
| High-volume or low-latency | Thousands of events per minute, sub-second requirements | Purpose-built integration with appropriate infrastructure |
| Multiple systems with complex orchestration | Dependencies between integrations, transaction coordination | Purpose-built integration or custom application |

Business logic ownership determines whether custom development is justified.

| Ownership Level | Characteristics | Recommended Approach |
|-----------------|-----------------|----------------------|
| Commodity | Standard process that many businesses perform similarly | Buy or use low-code; don't build |
| Configured commodity | Standard process with your specific parameters | Low-code or configurable purchased solution |
| Differentiated | Logic that creates competitive advantage | Custom development for core logic; can use low-code for surrounding workflow |
| Core IP | The logic is your business | Custom development with full control |

Requirement-to-approach mapping

Using artifacts from your requirements gathering process, evaluate each automation candidate:

From JTBD job statements, identify:

  • What triggers the job (event-driven vs. scheduled vs. user-initiated)
  • What outcome defines success (measurable criteria for validation)
  • What constraints apply (latency, accuracy, compliance)

From Value Stream Mapping, identify:

  • Current process steps and their cycle times
  • Decision points and the logic governing each
  • Exception paths and their frequency
  • Data sources and destinations

From the 7 C's analysis, identify:

  • Whether this capability is core (build) or context (buy)
  • Complexity level and available internal competence
  • Integration requirements with existing systems

Hybrid patterns for common scenarios

Most real implementations combine approaches. These patterns appear frequently:

Low-code orchestration with custom steps: The workflow lives in n8n or Make, handling triggers, routing, and standard integrations. Specific steps call custom services via webhook for proprietary logic. You get low-code's speed for 80% of the workflow with custom precision where it matters.

Custom application with AI-assisted processing: A purpose-built application handles the core workflow, but specific steps use LLM calls for tasks like classification, extraction, or content generation. The custom code validates AI outputs and handles errors; the AI provides capability the application couldn't otherwise achieve.

ELT pipeline feeding AI applications: Airbyte extracts data from source systems into a warehouse. dbt transforms and prepares it. An AI application consumes the clean, structured data to generate insights, answer questions, or produce reports. Each layer does what it's good at.

API gateway unifying integration approaches: A custom API gateway presents a clean interface to your systems. Behind it, some routes go to low-code workflows, some to custom services, some to AI applications. Consumers don't need to know which approach handles their request; you can change implementation without affecting integrations.

Cost and timeline expectations by approach

| Approach | Initial Build | Monthly Run Cost | Change Velocity |
|----------|---------------|------------------|-----------------|
| Low-code workflow | Days to weeks | $50-$1,000 per workflow | Hours to days for modifications |
| Purpose-built integration | 2-8 weeks | $200-$2,000 infrastructure | Days to weeks for modifications |
| Custom application (MVP) | 6-12 weeks | $500-$5,000+ infrastructure | Weeks for significant changes |
| AI application (pilot) | 4-8 weeks | $200-$2,000 LLM + infrastructure | Days to weeks for modifications |

These ranges assume AI-assisted development for custom work. Costs scale with complexity, volume, and reliability requirements. Use these as starting points for detailed estimation once you've selected an approach.


Applying the framework

The value of structured requirements gathering becomes clear at implementation selection. A JTBD statement like "When an invoice arrives from a vendor, I want to extract key fields and validate against our purchase orders so I can process payment without manual data entry" immediately suggests:

  • Trigger: Event-driven (invoice arrival)
  • Variability: High within bounds (different invoice formats, consistent expected output)
  • Logic: Not differentiating (standard accounts payable process)
  • Integration: Connects email/document storage to ERP

This maps to: AI-driven document extraction (for the variability in invoice formats) → validation logic in low-code or simple custom code → ERP integration via existing connector or purpose-built integration, depending on ERP capabilities.

Contrast with: "When a customer requests a quote, I want to generate pricing using our proprietary margin calculation that accounts for customer history, current inventory, and competitive positioning so I can respond quickly without involving a pricing analyst." This suggests:

  • Trigger: User-initiated or event-driven
  • Variability: Moderate (defined inputs, calculation logic)
  • Logic: Differentiating (proprietary pricing methodology)
  • Integration: Needs CRM data, inventory data, possibly competitive intelligence

This maps to: Custom development for the pricing logic (it's differentiating) with purpose-built integration to pull required data from source systems, possibly exposed through an API that a low-code workflow calls to orchestrate the customer response.

The frameworks stack: problem selection ensures you're working on something valuable, requirements gathering captures what success looks like, build-versus-buy decisions clarify ownership, and implementation selection matches technical approach to what you've learned. Skip the early steps and implementation selection becomes guesswork. Do them well and the right technical approach often becomes obvious.


Need help selecting the right implementation approach for your automation initiative? Schedule a conversation to discuss your specific requirements.