Jan 2025 • 7 min read
LangChain Expression Language (LCEL): The Declarative Way
Master LCEL's declarative approach to building composable, performant AI pipelines with automatic optimization.
What is LCEL?
LangChain Expression Language (LCEL) connects AI building blocks such as prompts, models, retrievers, and parsers with a pipe operator (|), so that data flows smoothly from one component to the next. It takes a declarative approach: you build new Runnables by composing existing Runnables.
The Pipe Paradigm
Think of LCEL like Unix pipes for AI workflows:
prompt | model | parser
Data flows left to right; each component transforms its input and passes the result along.
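To make the paradigm concrete, here is a toy re-implementation of the pipe pattern in plain Python. This is not LangChain's actual class (the `Step` name and its methods are invented for illustration); it only shows the idea behind |:

```python
# A toy sketch of the pipe pattern (not LangChain's real classes).
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # left | right composes: run self first, then other.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for a prompt template, a model, and an output parser.
prompt = Step(lambda topic: f"Tell me a joke about {topic}")
model = Step(lambda text: f"MODEL RESPONSE to: {text}")
parser = Step(lambda msg: msg.strip())

chain = prompt | model | parser
print(chain.invoke("bears"))
```

Overloading `__or__` is exactly how real Runnables make `prompt | model | parser` return a new, composed Runnable.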
Why LCEL?
Composable Pipelines
Connect retrievers, models, and parsers like Lego blocks. LCEL enables you to build complex workflows from simple, reusable components. Each component is a "Runnable" that can be combined with others.
Performance Optimization
LCEL provides optimized execution, including parallel and streaming execution. Steps that don't depend on each other, such as the branches of a RunnableParallel, run concurrently, which can substantially reduce latency for complex chains.
Consistency
The same chain works with sync, async, streaming, or batch execution. Write your chain once, and LCEL handles the different execution modes automatically. No need to write separate code for streaming vs batch processing.
Maintainability
Declarative design is easier to extend and debug. LCEL chains read like a description of what you want to happen, not how to make it happen. This makes code reviews easier and reduces bugs.
Automatic Observability
As chains grow more complex, LCEL automatically logs every step to LangSmith (when tracing is enabled), giving full observability and debuggability. You get visibility into every step of execution without manual instrumentation.
What You Get for Free
- Automatic tracing of all chain steps
- Input and output logging for each component
- Latency measurement per step
- Token usage tracking
- Error propagation with context
When to Use LCEL
While users can run chains with hundreds of steps in production, LangChain recommends using LCEL for simpler orchestration tasks. LCEL excels at:
- Linear or mostly-linear workflows
- Workflows with simple branching logic
- RAG pipelines: retrieve → format → generate → parse
- Multi-step generation with consistent flow
When to Use LangGraph Instead
When the application requires complex state management, branching, cycles or multiple agents, LangChain recommends using LangGraph. Specifically, choose LangGraph when you need:
- Cyclic workflows (loops and retries)
- Complex conditional branching
- Persistent state across steps
- Multi-agent coordination
- Human-in-the-loop workflows
Core Concepts
Runnables
Everything in LCEL is a Runnable: prompts, models, retrievers, parsers, and chains themselves. This uniform interface enables seamless composition. Every Runnable supports:
- invoke(): Synchronous execution
- ainvoke(): Async execution
- stream(): Streaming results
- batch(): Process multiple inputs
Pipes and Composition
The pipe operator (|) chains Runnables together: the output of the left side becomes the input of the right. Plain functions and dicts used in a chain are coerced into Runnables automatically.
Parallel Execution
Use RunnableParallel to execute multiple chains simultaneously and combine results. Perfect for scenarios like retrieving from multiple sources or getting responses from different models.
Common Patterns
RAG Chain
The classic RAG pattern in LCEL: retriever → format context → prompt → model → parse. Clean, readable, and performs well even with complex retrievers.
Sequential Processing
Chain multiple LLM calls where each refines the output of the previous one. LCEL makes multi-step generation patterns simple and maintainable.
Conditional Routing
Use RunnableBranch to route based on input or intermediate results. While not as powerful as LangGraph's conditional edges, it handles simple branching elegantly.
Important Note: JavaScript Deprecation
The current JavaScript LCEL docs will be deprecated and no longer maintained with the release of LangChain v1.0 in October 2025. JavaScript users should plan migration strategies or stick with LangChain v0.x long-term.
Best Practices
Keep Chains Focused
Each chain should do one thing well. Compose larger workflows from smaller, focused chains rather than creating monolithic chains.
Use Type Hints
Python type hints let static type checkers validate chain composition before anything runs. Catch input/output mismatches at design time instead of at runtime.
Leverage Parallelism
When steps don't depend on each other, use RunnableParallel. The performance gains can be substantial.
Monitor in LangSmith
Even though logging is automatic, regularly review traces in LangSmith to identify bottlenecks and optimize performance.
This article was generated with the assistance of AI technology and reviewed for accuracy and relevance.