This solution demonstrates the power of combining multiple Azure services to create sophisticated deployment patterns. As Azure Functions Flex Consumption matures, this approach provides a solid foundation that can be easily adapted when native deployment slots become available.
The Challenge: Deployment Slots Without Slots
Azure Functions on the Consumption plan has long provided deployment slots, enabling seamless blue-green deployments and canary releases. However, with the introduction of Azure Functions Flex Consumption – a new hosting model that offers improved scaling and cost optimization – deployment slots are not yet available at the time of writing this article.
This presents a significant challenge for organizations that have adopted progressive deployment practices and need to maintain zero-downtime deployments while taking advantage of the benefits that Flex Consumption offers.
The Solution: API Management as the Traffic Router
This repository demonstrates an approach to achieving progressive deployments for Azure Functions Flex Consumption by leveraging Azure API Management (APIM) backend pools as the traffic routing mechanism. Instead of relying on native deployment slots, we architect a solution that uses two separate function apps and orchestrates traffic distribution through APIM.
Architecture Overview

The solution consists of:
- Two Azure Functions Flex Consumption Apps: A “Green” and “Blue” deployment
- Azure API Management: Acts as the traffic router with backend pools
- Azure DevOps Pipeline: Orchestrates the deployment and traffic shifting process
- Infrastructure as Code: Bicep templates for reproducible deployments
Infrastructure Implementation
Function Apps Deployment
The foundation of our progressive deployment strategy lies in provisioning two identical Azure Functions Flex Consumption apps that serve as our Blue and Green environments. This approach ensures we always have a stable production environment while allowing us to deploy and test new versions in isolation.
Why Two Function Apps?
Since Azure Functions Flex Consumption doesn’t currently support deployment slots, we create two separate function apps to simulate the slot behavior. Each app serves as a complete, independent deployment target with its own:
- Compute resources and scaling configuration
- Storage containers for deployment packages
- Application Insights telemetry
- Runtime environment settings
Shared Infrastructure Benefits
Both function apps share certain infrastructure components to optimize costs and maintain consistency:
- Storage Account: A single storage account hosts deployment packages for both apps in separate containers
- Application Insights: Shared monitoring instance allows for unified observability and comparison between deployments
- Log Analytics: Centralized logging enables comprehensive analysis across both environments
The main Bicep template (main.bicep) provisions both Green and Blue function apps with identical configurations:
// Azure Functions Flex Consumption - Green
module greenFunctionApp 'core/host/function.bicep' = {
  name: 'greenfunctionapp'
  scope: functionResourceGroup
  params: {
    location: location
    tags: tags
    planName: !empty(greenFxPlanName) ? greenFxPlanName : '${abbrs.webServerFarms}${resourceToken}${shortGuid}'
    appName: greenAppName
    storageAccountName: storage.outputs.name
    deploymentStorageContainerName: greenFxDeploymentStorageContainerName
    applicationInsightsName: monitoring.outputs.applicationInsightsName
    functionAppRuntime: functionAppRuntime
    functionAppRuntimeVersion: functionAppRuntimeVersion
    maximumInstanceCount: maximumInstanceCount
  }
}

// Azure Functions Flex Consumption - Blue
module blueFunctionApp 'core/host/function.bicep' = {
  name: 'bluefunctionapp'
  scope: functionResourceGroup
  params: {
    location: location
    tags: tags
    planName: !empty(blueFxPlanName) ? blueFxPlanName : '${abbrs.webServerFarms}${resourceToken}${shortGuid}'
    appName: blueAppName
    storageAccountName: storage.outputs.name
    deploymentStorageContainerName: blueFxDeploymentStorageContainerName
    applicationInsightsName: monitoring.outputs.applicationInsightsName
    functionAppRuntime: functionAppRuntime
    functionAppRuntimeVersion: functionAppRuntimeVersion
    maximumInstanceCount: maximumInstanceCount
  }
}
Configuration Considerations
The maximumInstanceCount parameter is crucial for Flex Consumption apps as it controls the scaling behavior. Setting this appropriately ensures that both environments can handle production traffic when needed, while also managing costs during idle periods. The shared Application Insights instance allows for unified monitoring and comparison of performance metrics between the two deployments.
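As a minimal sketch of how this could look at deployment time (assuming main.bicep is deployed at subscription scope, as the traffic-shifting template is later; the location and instance limit shown here are illustrative):

# Illustrative only: provision both function apps with an explicit scale-out limit
az deployment sub create \
  --location westeurope \
  --template-file main.bicep \
  --parameters maximumInstanceCount=100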
2. API Management Backend Configuration
Azure API Management backends serve as the abstraction layer between our API and the underlying function apps. In our progressive deployment scenario, each function app is registered as a separate backend, allowing APIM to route traffic independently to either the Green or Blue deployment.
Backend Security and Authentication
Each backend is configured with proper security credentials to authenticate with the function apps. The solution uses function keys for authentication, which are automatically retrieved from the function app deployment and securely stored in the backend configuration. This ensures that APIM can authenticate with the function apps without exposing sensitive credentials. However, in a more controlled environment you could use Entra ID-based authentication as well, by adjusting the API policy.
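For reference, a sketch of how the same function key could be retrieved manually, for example to verify what the Bicep deployment wires into the backend credentials (the resource names are placeholders, and the query assumes the default function key is the one in use):

# Illustrative only: fetch the default function key for the Green app
az functionapp keys list \
  --resource-group rg-functions \
  --name func-green-app \
  --query "functionKeys.default" -o tsv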
SSL/TLS Configuration
The backend configuration includes SSL/TLS validation settings to ensure secure communication between APIM and the function apps. The validateCertificateChain and validateCertificateName parameters ensure that all communications are properly encrypted and verified, maintaining security standards for production deployments.
Backend Naming Convention
The backends are named using a clear convention (green-backend and blue-backend) that makes it easy to identify which environment each backend represents. This naming strategy is crucial for the traffic shifting logic, as it allows the deployment scripts to programmatically determine which backend is currently active.
Each function app is registered as a backend in APIM using the following configuration:
module apimGreenBackend 'core/apim/backend.bicep' = {
  name: 'green-backend'
  scope: apimResourceGroup
  params: {
    apimServiceName: apimServiceName
    backendName: 'green-backend'
    backendDescription: 'Green Backend service for API Management'
    backendUrl: greenFunctionApp.outputs.functionUri
    backendProtocol: 'http'
    functionKey: greenFunctionApp.outputs.functionKey
    backendTitle: greenFunctionApp.outputs.functionKey
    validateCertificateChain: true
    validateCertificateName: true
  }
}

module apimBlueBackend 'core/apim/backend.bicep' = {
  name: 'blue-backend'
  scope: apimResourceGroup
  params: {
    apimServiceName: apimServiceName
    backendName: 'blue-backend'
    backendDescription: 'Blue Backend service for API Management'
    backendUrl: blueFunctionApp.outputs.functionUri
    backendProtocol: 'http'
    functionKey: blueFunctionApp.outputs.functionKey
    backendTitle: blueFunctionApp.outputs.functionKey
    validateCertificateChain: true
    validateCertificateName: true
  }
}
Dynamic Backend URLs
The backend URLs are dynamically generated from the function app outputs, ensuring that the APIM backends always point to the correct function app endpoints. This dynamic configuration is essential for Infrastructure as Code practices, as it eliminates the need for manual URL configuration and reduces the risk of configuration errors.
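As a quick sanity check, the generated backend URL can be compared against the function app's actual hostname. A sketch with placeholder subscription, resource group, and service names (the backend name and API version follow the templates shown in this article):

# Hostname of the Green function app
az functionapp show --resource-group rg-functions --name func-green-app \
  --query defaultHostName -o tsv

# URL currently configured on the green-backend in APIM
az rest --method get \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<apim-rg>/providers/Microsoft.ApiManagement/service/<apim-name>/backends/green-backend?api-version=2024-06-01-preview" \
  --query "properties.url" -o tsv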
3. Backend Pool for Traffic Distribution
The backend pool is the heart of our progressive deployment strategy. It allows us to distribute incoming requests across multiple backends based on configurable weights, turning APIM from a simple gateway into a weighted load balancer capable of supporting sophisticated deployment patterns.
Understanding Backend Pools
A backend pool aggregates multiple individual backends into a single logical unit. Instead of routing all traffic to a single backend, the pool can intelligently distribute requests based on weights assigned to each backend. This weight-based distribution is what enables our canary and blue-green deployment strategies.
Weight-Based Traffic Distribution
The weights array in the configuration determines the percentage of traffic each backend receives. For example, weights of [90, 10] means 90% of traffic goes to the first backend (Green) and 10% to the second backend (Blue). This granular control allows for precise traffic shifting during deployments.
Priority vs. Weight Configuration
Each backend in the pool has both a priority and weight setting. The priority determines the order of preference when backends are available, while the weight determines the traffic distribution among backends with the same priority. In our implementation, both backends have the same priority (1), so traffic distribution is purely based on weights.
The backend pool configuration enables sophisticated traffic management:
module backendPool 'core/apim/backend-pool.bicep' = {
  name: backendPoolName
  scope: apimResourceGroup
  params: {
    apimServiceName: apimServiceName
    backendPoolName: backendPoolName
    backendIds: [
      apimGreenBackend.outputs.backendId
      apimBlueBackend.outputs.backendId
    ]
    weights: [100, 0] // Initially, all traffic goes to Green
  }
}
The backend pool implementation (core/apim/backend-pool.bicep) uses the preview API to create a weighted pool:
resource backendPool 'Microsoft.ApiManagement/service/backends@2024-06-01-preview' = {
  name: backendPoolName
  parent: apimService
  properties: {
    description: backendPoolDescription
    type: 'Pool'
    pool: {
      services: [
        for i in range(0, length(backendIds)): {
          id: backendIds[i]
          priority: 1
          weight: weights[i]
        }
      ]
    }
  }
}
Dynamic Reconfiguration
One of the key advantages of this approach is the ability to dynamically reconfigure traffic distribution without disrupting active connections. When weights are updated, APIM gradually shifts new requests to match the new distribution, ensuring smooth transitions during deployments.
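To observe the currently effective distribution, the pool definition can be read back from the ARM API. A sketch with placeholder subscription, resource group, and service names (pool name and API version as used in this article):

# Illustrative only: show the weights currently applied to the pool
az rest --method get \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<apim-rg>/providers/Microsoft.ApiManagement/service/<apim-name>/backends/green-blue-pool?api-version=2024-06-01-preview" \
  --query "properties.pool.services[*].{backend:id, weight:weight}" -o table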
4. API Policy Configuration
API policies in Azure API Management serve as the request processing pipeline, defining how incoming requests are handled, transformed, and routed to backend services. In our progressive deployment scenario, the policy configuration is elegantly simple yet powerful, demonstrating how complex traffic routing can be achieved with minimal policy code.
CORS Configuration
The policy includes comprehensive CORS (Cross-Origin Resource Sharing) configuration to enable web applications from different domains to access the API. This is crucial for modern web applications that often have frontend and backend services deployed on different domains. The configuration allows all origins, methods, and headers, making it suitable for development and testing environments.
Backend Service Routing
The most critical aspect of the policy is the set-backend-service directive, which routes all incoming requests to our backend pool. This single line of configuration is what enables the entire progressive deployment strategy. By pointing to green-blue-pool, all requests are automatically distributed according to the weights configured in the backend pool.
Request Processing Flow
When a request arrives at the API:
- CORS headers are processed and added to the response
- The request is routed to the green-blue-pool backend
- The backend pool distributes the request based on the current weights
- The response is processed and returned to the client
The API policy configuration demonstrates the power of APIM’s declarative approach:
<policies>
    <inbound>
        <cors allow-credentials="false">
            <allowed-origins>
                <origin>*</origin>
            </allowed-origins>
            <allowed-methods preflight-result-max-age="300">
                <method>*</method>
            </allowed-methods>
            <allowed-headers>
                <header>*</header>
            </allowed-headers>
            <expose-headers>
                <header>*</header>
            </expose-headers>
        </cors>
        <base />
        <set-backend-service backend-id="green-blue-pool" />
    </inbound>
    <backend><base /></backend>
    <outbound><base /></outbound>
    <on-error><base /></on-error>
</policies>
Policy Flexibility
This policy configuration can be easily extended to include additional functionality such as request throttling, authentication, request/response transformation, or custom logging. The modular nature of APIM policies allows for sophisticated request processing while maintaining the core routing functionality.
Progressive Deployment Process
1. Initial State Management
State management is crucial for any progressive deployment system, as it needs to track which environment is currently serving production traffic. Our solution uses APIM named values as a simple yet effective state store that can be easily queried and updated during deployment processes.
Named Values as State Store
APIM named values provide a secure and accessible way to store configuration data that can be read by both policies and external systems. Unlike a separate database or storage account, named values are immediately available to APIM policies and can be queried with the Azure CLI without provisioning any additional infrastructure.
Current Slot Tracking
The current-slot-name named value serves as the single source of truth for determining which backend is currently serving production traffic. This information is essential for the traffic shifting logic, as it determines which backend should receive the new deployment and which direction traffic should be shifted.
Atomic State Updates
The state management approach ensures atomic updates during deployments. When traffic is shifted completely (100%) to a new backend, the named value is updated to reflect the new active slot. This atomic update prevents confusion during deployment processes and ensures consistent state across all components.
Fallback and Error Handling
The initial state configuration provides a clear starting point for the system. By defaulting to ‘green-backend’ as the initial active slot, the system has a predictable starting state that can be relied upon during the first deployment cycle.
The solution uses an APIM named value to track the current active slot:
module slotMarker 'core/apim/slot-marker.bicep' = {
  name: 'slot-marker'
  scope: apimResourceGroup
  params: {
    apimServiceName: apimServiceName
    currentSlotNameKey: 'current-slot-name'
    currentSlotName: 'green-backend'
  }
}
Integration with Deployment Scripts
The named value is queried by the deployment scripts to determine the current state and calculate the appropriate traffic distribution. This integration between infrastructure configuration and deployment automation ensures that the system remains self-consistent across deployment cycles.
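A sketch of that interaction, using the same CLI options as the traffic-shifting script excerpt later in this article (resource group and service name are placeholders, and az apim nv update is assumed to accept --value):

# Read the currently active slot
CURRENT_SLOT=$(az apim nv show -g rg-apim -n my-apim \
  --named-value-id current-slot-name --query value -o tsv)
echo "Active slot: $CURRENT_SLOT"

# After a 100% shift to Blue, flip the marker
az apim nv update -g rg-apim -n my-apim \
  --named-value-id current-slot-name --value blue-backend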
2. Azure DevOps Pipeline
The Azure DevOps pipeline serves as the orchestration engine for our progressive deployment strategy. It coordinates the entire deployment lifecycle, from code compilation to traffic shifting, ensuring that each step is executed in the correct order with proper validation and error handling.
Multi-Stage Pipeline Design
The pipeline is designed with multiple stages that represent different phases of the deployment process, though this is a design choice that can be adapted based on your specific requirements. You could opt for a single stage with multiple jobs, or even combine certain operations into fewer stages. However, the multi-stage approach offers several distinct advantages beyond just organizational clarity.
Each stage introduces a natural “bake time” – a period where the newly deployed code can stabilize and warm up before receiving production traffic. This bake time is particularly valuable for Azure Functions Flex Consumption, as it allows the function app to initialize, perform any startup operations, and reach an optimal performance state. It also mitigates the “cold start” issues that users might otherwise experience if traffic were shifted immediately to a newly deployed backend.
The staged approach also provides natural integration points for health checks and validation processes. During the bake time between deployment and traffic shifting, automated health checks can verify that the new deployment is functioning correctly, database connections are established, and dependencies are accessible. This validation period significantly reduces the risk of routing traffic to a problematic deployment.
Furthermore, the separation of concerns makes the pipeline more maintainable and allows for different approval gates, monitoring strategies, and rollback procedures at each phase. Teams can implement manual approvals before critical traffic shifts, automated quality gates, or custom validation logic specific to each deployment phase.
Deployment Target Selection
The deployment stage intelligently determines which function app (Blue or Green) should receive the new deployment based on the current active slot. This automated selection reduces the risk of human error and ensures consistent deployment practices across all environments.
Progressive Traffic Shifting
The pipeline implements a three-phase traffic shifting strategy:
- Canary (10%): Limited exposure to validate basic functionality
- Ring-A (50%): Broader exposure to test performance and stability
- General Availability (100%): Full deployment with complete traffic cutover
Each phase includes validation steps and can be configured with approval gates to ensure human oversight of critical deployment decisions.
The deployment pipeline (azure-pipelines/deploy-code-changes.yml) orchestrates the entire process:
stages:
  - stage: BuildCode
    displayName: 'Build Code (Golang) and Create Artifact'
    # Build the Go application and create deployment artifacts
  - stage: DeployCode
    displayName: 'Deploy Code to Azure Function App'
    # Deploy to the inactive slot (determined by current-slot-name)
  - stage: CanaryDeployment
    displayName: 'Canary Deployment - Traffic Shift'
    # Shift 10% of traffic to the new version
  - stage: RingADeployment
    displayName: 'Ring-A Deployment - Traffic Shift'
    # Shift 50% of traffic to the new version
  - stage: GADeployment
    displayName: 'General Available - Deployment - Traffic Shift'
    # Shift 100% of traffic to the new version
Error Handling and Rollback
Each stage includes comprehensive error handling and can be configured to automatically rollback changes if issues are detected. The pipeline maintains logs and artifacts that can be used for troubleshooting and audit purposes.
A particularly powerful enhancement to this deployment strategy would be the implementation of health checks before traffic shifting. By incorporating a health endpoint in your function applications, you can validate that a newly deployed backend is functioning correctly before directing any production traffic to it. This health check could verify database connectivity, external service availability, and basic functionality tests. If the health check fails, the pipeline could be configured to immediately abandon the deployment and maintain 100% traffic on the current stable version, effectively providing an automatic rollback mechanism.
The health check integration would involve adding a validation step after the code deployment but before the canary traffic shift. This step would call the health endpoint on the newly deployed backend and only proceed with traffic shifting if the health check passes. This approach significantly reduces the risk of exposing users to broken deployments and provides an additional safety net beyond basic deployment success indicators.
While this health check implementation is not covered in this article, it represents a valuable exercise for readers who want to enhance the robustness of their progressive deployment strategy. The existing pipeline structure can be easily extended to include such validation steps.
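As a starting point, such a gate could be as simple as the sketch below. It assumes a hypothetical /api/health endpoint on the newly deployed function app and a placeholder host name; a non-zero exit code would fail the pipeline stage before any traffic is shifted:

# Hypothetical health gate: run after deployment, before the canary shift
TARGET_APP_URL="https://func-blue-app.azurewebsites.net"   # newly deployed, inactive slot
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$TARGET_APP_URL/api/health")
if [ "$STATUS" -ne 200 ]; then
  echo "Health check failed (HTTP $STATUS) - keeping 100% traffic on the current slot"
  exit 1
fi
echo "Health check passed - proceeding with canary traffic shift"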
Security and Authentication
The pipeline uses service principal authentication to interact with Azure resources, ensuring that all operations are performed with appropriate permissions and are fully auditable. Secrets are managed through Azure DevOps variable groups and are never exposed in logs or artifacts.
3. Traffic Shifting Logic
The traffic shifting logic is the brain of our progressive deployment system. It intelligently determines how to distribute traffic between the Blue and Green environments based on the current state and the desired traffic percentage, ensuring smooth transitions and maintaining system stability throughout the deployment process.
Intelligent Direction Detection
The script automatically determines which direction traffic should flow based on the current active slot. This bidirectional capability means the system can handle deployments regardless of which environment is currently serving production traffic. The logic ensures that new deployments always go to the inactive slot, preventing accidental overwrites of production code.
Percentage-Based Traffic Distribution
The script accepts a traffic percentage parameter that represents how much traffic should be shifted to the new deployment. The system automatically calculates the remainder percentage for the current production environment. This approach provides intuitive control over traffic distribution and makes it easy to implement various deployment strategies.
State Synchronization
When traffic reaches 100% on the new deployment, the script automatically updates the APIM named value to reflect the new active slot. This synchronization ensures that subsequent deployments will correctly identify the current production environment and deploy to the appropriate inactive slot.
Validation and Error Handling
The script includes comprehensive validation to ensure that:
- Traffic percentage is within valid bounds (0-100)
- The current slot can be successfully retrieved from APIM
- Backend pool updates are applied successfully
- State updates are committed only when appropriate
Rollback Capabilities
The bidirectional nature of the traffic shifting logic means that rollbacks can be performed by simply running the script with appropriate parameters. For example, if a deployment at 50% traffic shows issues, running the script with 0% will immediately shift all traffic back to the previous version.
The traffic shifting script (scripts/traffic-shift.sh) intelligently manages the traffic distribution:
# Get the current slot name from the APIM named value
CURRENT_SLOT=$(az apim nv show -g $APIMRG -n $APIM --named-value-id current-slot-name --query value -o tsv)

# The remainder of the traffic stays on the current production slot
REMAINDER_PERCENTAGE=$((100 - TRAFFIC_PERCENTAGE))

# Determine traffic weights based on the current slot and the provided percentage
if [ "$CURRENT_SLOT" = "green-backend" ]; then
    # Current is green, so shift the specified percentage to blue (new deployment)
    BLUE_WEIGHT=$TRAFFIC_PERCENTAGE
    GREEN_WEIGHT=$REMAINDER_PERCENTAGE
else
    # Current is blue, so shift the specified percentage to green (new deployment)
    BLUE_WEIGHT=$REMAINDER_PERCENTAGE
    GREEN_WEIGHT=$TRAFFIC_PERCENTAGE
fi

# Update the backend pool with the new weights
az deployment sub create --location westeurope --template-file traffic-shifting.bicep \
    --parameters apimResourceGroupName=$APIMRG \
    --parameters apimServiceName=$APIM \
    --parameters blueWeight=$BLUE_WEIGHT \
    --parameters greenWeight=$GREEN_WEIGHT
Idempotent Operations
The script is designed to be idempotent, meaning it can be run multiple times with the same parameters without causing adverse effects. This characteristic is crucial for automated deployment systems where retries might be necessary due to temporary failures or network issues.
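For example, a rollback from a partial shift can be expressed as the same weight deployment with the percentage set to zero. This mirrors the excerpt above and assumes Green is still the recorded active slot:

TRAFFIC_PERCENTAGE=0                               # nothing goes to the new (Blue) deployment
REMAINDER_PERCENTAGE=$((100 - TRAFFIC_PERCENTAGE))
az deployment sub create --location westeurope --template-file traffic-shifting.bicep \
    --parameters apimResourceGroupName=$APIMRG \
    --parameters apimServiceName=$APIM \
    --parameters blueWeight=$TRAFFIC_PERCENTAGE \
    --parameters greenWeight=$REMAINDER_PERCENTAGE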
4. Traffic Shifting Bicep Template
The traffic shifting Bicep template represents the Infrastructure as Code approach to dynamic traffic management. Unlike traditional deployment scripts that might use imperative commands, this template provides a declarative way to update backend pool configurations, ensuring consistency and repeatability across deployments.
Declarative Configuration Management
The template uses a declarative approach to define the desired state of the backend pool. This means that regardless of the current configuration, the template will ensure that the backend pool ends up with the specified weights. This approach is more reliable than imperative scripts that might fail if the current state doesn’t match expectations.
Parameter-Driven Flexibility
The template accepts parameters for both blue and green weights, allowing for any traffic distribution scenario. This flexibility supports various deployment strategies beyond simple blue-green deployments, including A/B testing scenarios where traffic might be split at different ratios for extended periods.
Resource Reference Management
The template uses existing resource references to locate the current backend configurations. This approach ensures that the template works with the actual deployed resources rather than making assumptions about resource states. The use of existing resources also provides validation that the backends are properly configured before attempting traffic shifts.
Atomic Updates
Bicep deployments are atomic, meaning that either all changes are applied successfully or none are applied. This atomicity is crucial for traffic shifting operations, as partial updates could result in inconsistent routing behavior that might impact user experience.
Rollback and Audit Capabilities
Since the template is deployed using Azure Resource Manager, all changes are logged and can be audited. The deployment history provides a complete record of all traffic shifts, making it easy to understand the deployment timeline and rollback to previous configurations if necessary.
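A sketch of how that history could be inspected, assuming the default deployment name that the CLI derives from the template file name:

# Illustrative only: show the most recent traffic-shifting deployment and the weights it applied
az deployment sub show --name traffic-shifting \
  --query "{state:properties.provisioningState, time:properties.timestamp, parameters:properties.parameters}" \
  -o jsonc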
The traffic shifting template (traffic-shifting.bicep) updates the backend pool weights:
module backendPool 'core/apim/backend-pool.bicep' = {
  name: backendPoolName
  scope: apimResourceGroup
  params: {
    apimServiceName: apimServiceName
    backendPoolName: backendPoolName
    backendIds: [
      apimGreenBackend.id
      apimBlueBackend.id
    ]
    weights: [greenWeight, blueWeight]
  }
}
Integration with CI/CD Pipelines

The template is designed to integrate seamlessly with CI/CD pipelines. The parameter-based approach makes it easy to pass dynamic values from pipeline variables, while the declarative nature ensures that deployments are predictable and can be easily tested in non-production environments.
Deployment Scenarios
Blue-Green Deployment
- Initial State: Green (100%) | Blue (0%)
- Deploy to Blue: New version deployed to Blue function app
- Switch Traffic: Green (0%) | Blue (100%)
- Rollback if needed: Switch back to Green instantly
Canary Deployment
- Initial State: Green (100%) | Blue (0%)
- Deploy to Blue: New version deployed to Blue function app
- Canary Release: Green (90%) | Blue (10%)
- Ring-A Release: Green (50%) | Blue (50%)
- Full Release: Green (0%) | Blue (100%)
Sample Function Implementation
The repository includes a comprehensive Go-based Azure Function that serves as both a functional example and a demonstration of how to implement version identification and logging in a progressive deployment scenario. The function is designed to provide clear visibility into which version is serving requests, making it easier to validate deployment success and troubleshoot issues.
Request Logging and Traceability
The function implements comprehensive request logging that captures essential information about each incoming request. This logging is crucial for progressive deployments, as it allows operators to monitor which version is handling requests and validate that traffic distribution is working as expected. The logging includes timestamps, request methods, URLs, and Azure Functions-specific headers like the invocation ID.
Azure Functions Handler Pattern
The function follows the standard Azure Functions custom handler pattern for Go applications. It sets up an HTTP server that listens for function invocations and formats responses according to the Azure Functions runtime expectations. This pattern ensures compatibility with the Azure Functions runtime while providing full control over the request handling logic.
Production-Ready Logging
The function includes structured logging that captures important request metadata. This information is essential for monitoring and troubleshooting in production environments, especially during progressive deployments where different versions might exhibit different behavior patterns.
Response Structure Compatibility
The function carefully constructs responses that are compatible with both the Azure Functions runtime and API Management. The response includes proper HTTP status codes, content types, and body formatting that ensures seamless integration with the overall system architecture.
The sample function demonstrates version identification and proper logging:
func txHttpTriggerHandler(w http.ResponseWriter, r *http.Request) {
    t := time.Now()
    fmt.Printf("=== [MOIMHA] TX Request received at: %s ===\n", t.Format("2006-01-02 15:04:05"))

    // Create proper HTTP response structure
    httpResponse := HttpResponse{
        StatusCode: 200,
        Body:       `{"color":"#0000FF","hello":"world","message":"Software version: 0.0.1","timestamp":"` + t.Format("2006-01-02 15:04:05") + `"}`,
        Headers: map[string]interface{}{
            "Content-Type": "application/json",
        },
    }

    // Wrap the HTTP output in the custom handler invoke response
    outputs := map[string]interface{}{"res": httpResponse}
    response := InvokeResponse{
        Outputs:     outputs,
        Logs:        []string{},
        ReturnValue: nil,
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}
Customization for Different Environments
The function can be easily customized to return different information for Blue and Green deployments. For example, the color field could be set to different values (#0000FF for blue, #00FF00 for green) to provide visual indication of which environment is serving requests. This visual feedback is particularly useful during demonstrations and validation testing.
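A lightweight way to validate the traffic split from the client side is to sample the API repeatedly and count the returned colors. The gateway URL and API path below are placeholders:

# Illustrative only: sample the API 20 times and tally the colors returned
for i in $(seq 1 20); do
  curl -s "https://my-apim.azure-api.net/demo/tx" | grep -o '"color":"[^"]*"'
done | sort | uniq -c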
Benefits of This Approach
1. Zero-Downtime Deployments
- Traffic is gradually shifted without service interruption
- Immediate rollback capability if issues are detected
2. Flexible Traffic Management
- Support for various deployment strategies (blue-green, canary, ring-based)
- Configurable traffic percentages at each stage
3. Cost Optimization
- Leverages Flex Consumption’s cost benefits
- Only one function app needs to be “warm” at a time during steady state
4. Observability
- Both function apps can be monitored independently
- Application Insights provides detailed telemetry for each deployment
5. Infrastructure as Code
- Entire solution is defined in Bicep templates
- Version controlled and reproducible deployments
Considerations and Limitations
1. Resource Costs
- Requires two function apps instead of one
- Additional APIM costs for backend pool management
2. Complexity
- More complex than native deployment slots
- Requires careful orchestration of traffic shifting
3. State Management
- Need to manage which slot is currently active
- Careful coordination required during deployments
4. Cold Start Considerations
- The inactive function app may experience cold starts when traffic is shifted
- Consider using warming strategies for critical applications (a minimal warm-up sketch follows below)
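A minimal warm-up sketch, assuming a placeholder host name and a hypothetical endpoint on the inactive function app, run just before the first traffic shift:

# Illustrative only: send a few requests to the inactive app so it is warm before the canary shift
for i in $(seq 1 5); do
  curl -s -o /dev/null "https://func-blue-app.azurewebsites.net/api/tx" \
    -H "x-functions-key: $FUNCTION_KEY"
  sleep 2
done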
Future Enhancements
As Azure Functions Flex Consumption evolves, we can expect:
- Native Deployment Slots: deployment slots may be introduced for Flex Consumption in the future
- Enhanced APIM Features: More sophisticated traffic routing capabilities
- Integrated Monitoring: Better integration between APIM and Application Insights
- Automated Rollback: Automatic rollback based on health metrics
Conclusion
While Azure Functions Flex Consumption doesn’t currently support deployment slots, this solution demonstrates how to architect robust progressive deployments using APIM backend pools. The approach provides the flexibility and safety of blue-green and canary deployments while leveraging the cost and performance benefits of Flex Consumption.
The combination of Infrastructure as Code, automated Azure DevOps pipelines, and intelligent traffic routing creates a production-ready solution that can be adapted to various deployment scenarios and requirements.
This architecture proves that with creativity and proper tooling, we can overcome current platform limitations and continue to follow deployment best practices while adopting new Azure services.
The complete source code for this solution is available on GitHub at: https://github.com/MoimHossain/progressive-deployment-flex-consumption. Readers are encouraged to explore the implementation details, experiment with the code, and adapt it to their specific requirements.