This simple demo shows a possible integration between two systems (System A and System B) using Azure Functions.
Note
The full example with all resources can be found
here on GitHub.
The architecture constraints:
- All updates from System A must be transferred into System B.
- System A is listening on HTTP with a REST API.
- System B is listening on HTTP with a REST API.
- System B is not fully compatible in message definitions, so field mapping must be used.
- The mapping must be saved in a database (I chose Cosmos DB).
- Due to missing push notifications in System A, its API must be periodically checked.
- System B may be occasionally offline, so some type of persistent bus should be used.
Overall architecture
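The flow implied by these constraints can be sketched as a minimal in-memory simulation. All names here are illustrative, not the demo's actual code; in the real deployment the queue is a Service Bus topic and each stage is an Azure Function:

```python
from collections import deque

def poll_system_a(records, last_poll):
    """Timer-driven poll: System A has no push notifications,
    so only records updated since the last run are forwarded."""
    return [r for r in records if r["updated"] > last_poll]

# Stands in for the persistent Service Bus topic.
bus = deque()

def forward_to_bus(updates):
    bus.extend(updates)

def deliver_to_system_b(online, received):
    """Messages stay on the bus while System B is offline."""
    while bus and online:
        received.append(bus.popleft())

records = [{"id": 1, "updated": 5}, {"id": 2, "updated": 12}]
received = []
forward_to_bus(poll_system_a(records, last_poll=10))
deliver_to_system_b(online=False, received=received)  # B offline: nothing moves
deliver_to_system_b(online=True, received=received)   # B back online: queue drains
```

The key design point is the middle queue: neither side has to be up at the same time as the other.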
Set up the environment:
# Create a required storage account for application code
az storage account create --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --resource-group $RG --location $LOCATION
# Create function app
az functionapp create --consumption-plan-location $LOCATION \
--runtime python \
--runtime-version 3.8 \
--functions-version 3 \
--name $FUNC_APP_NAME \
--os-type linux \
--resource-group $RG \
--storage-account $STORAGE_ACCOUNT_NAME
# Create service bus namespace
az servicebus namespace create --resource-group $RG --name $SERVICEBUS_NAMESPACE --location $LOCATION --sku Standard
# Subscription model will be used, so create topic
az servicebus topic create --name system-a-in \
--resource-group $RG \
--namespace-name $SERVICEBUS_NAMESPACE
# Create a required subscription
az servicebus topic subscription create --resource-group $RG --namespace-name $SERVICEBUS_NAMESPACE --topic-name system-a-in --name system-a-in-subscription
Deploy and set up Cosmos DB:
az cosmosdb create \
--name $COSMOSDB_ACCOUNT_NAME \
--kind GlobalDocumentDB \
--resource-group $RG
Create Database and Collection in the Cosmos DB:
# Get Key
COSMOSDB_KEY=$(az cosmosdb keys list --name $COSMOSDB_ACCOUNT_NAME --resource-group $RG --output tsv | awk '{print $1}')
# Create Database
az cosmosdb database create \
--name $COSMOSDB_ACCOUNT_NAME \
--db-name $DATABASE_NAME \
--key $COSMOSDB_KEY \
--resource-group $RG
# Create a container for customers (System B)
# Create a container with a partition key and provision 400 RU/s throughput.
COLLECTION_NAME="customers"
az cosmosdb collection create \
--resource-group $RG \
--collection-name $COLLECTION_NAME \
--name $COSMOSDB_ACCOUNT_NAME \
--db-name $DATABASE_NAME \
--partition-key-path /id \
--throughput 400
# Create a container for mapping
COLLECTION_NAME="mapping"
az cosmosdb collection create \
--resource-group $RG \
--collection-name $COLLECTION_NAME \
--name $COSMOSDB_ACCOUNT_NAME \
--db-name $DATABASE_NAME \
--partition-key-path /id \
--throughput 400
Insert the following JSON into the mapping container:
{
  "id": "SYSTEMB",
  "new_fields": [
    {
      "input_fields": ["first_name", "last_name"],
      "separator": " ",
      "output_fields": ["full_name"],
      "operation": "CONCAT"
    },
    {
      "input_fields": ["address_line2"],
      "separator": "/",
      "output_fields": ["city", "postal_code"],
      "operation": "SPLIT"
    }
  ],
  "delete_fields": ["first_name", "last_name"]
}
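The two rules above build a full_name via CONCAT and split address_line2 into city and postal_code. A minimal interpreter for this mapping format could look like the following sketch; apply_mapping and the sample record are assumptions for illustration, not the demo's actual code:

```python
def apply_mapping(record, mapping):
    """Apply the CONCAT/SPLIT rules from a mapping document to one record."""
    out = dict(record)
    for rule in mapping["new_fields"]:
        sep = rule["separator"]
        if rule["operation"] == "CONCAT":
            # Join several input fields into a single output field.
            out[rule["output_fields"][0]] = sep.join(
                record[f] for f in rule["input_fields"])
        elif rule["operation"] == "SPLIT":
            # Split one input field into several output fields.
            parts = record[rule["input_fields"][0]].split(sep)
            out.update(zip(rule["output_fields"], parts))
    # Drop fields System B does not understand.
    for field in mapping.get("delete_fields", []):
        out.pop(field, None)
    return out

mapping = {
    "new_fields": [
        {"input_fields": ["first_name", "last_name"], "separator": " ",
         "output_fields": ["full_name"], "operation": "CONCAT"},
        {"input_fields": ["address_line2"], "separator": "/",
         "output_fields": ["city", "postal_code"], "operation": "SPLIT"},
    ],
    "delete_fields": ["first_name", "last_name"],
}
record = {"first_name": "Jane", "last_name": "Doe",
          "address_line2": "Prague/11000"}
result = apply_mapping(record, mapping)
```

With the sample record, result contains full_name, city, and postal_code, while first_name and last_name are removed.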
Publish the application code to the function app:
func azure functionapp publish $FUNC_APP_NAME
Basic overview of deployed functions:
Functions in skl-func-app:
bus-to-system-b - [serviceBusTrigger]
system-a - [httpTrigger]
Invoke url: https://skl-func-app.azurewebsites.net/api/system-a
system-b - [httpTrigger]
Invoke url: https://skl-func-app.azurewebsites.net/api/system-b
system-a-to-bus - [timerTrigger]
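One delivery detail of bus-to-system-b is worth spelling out: with a Service Bus trigger, a message is completed only when the function returns successfully, while an exception abandons it so Service Bus redelivers it later. That is what makes the bus a persistent buffer while System B is offline. A hedged sketch of that core logic (the function name and payloads are assumptions, not the demo's code):

```python
def handle_bus_message(body, post_to_system_b):
    """Forward one bus message to System B.

    Raising on a failed POST causes the Functions runtime to abandon the
    message, so it stays on the subscription and is redelivered later.
    """
    status = post_to_system_b(body)
    if status >= 400:
        raise RuntimeError(f"System B returned {status}")
    return "completed"

# Simulate System B being reachable:
ok = handle_bus_message('{"full_name": "Jane Doe"}', lambda body: 200)
```

When the stubbed POST returns an error status instead, the function raises and the message remains queued.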
Debugging and testing
When all functions start locally, two HTTP endpoints will be created:
Functions:
system-a: [GET,POST] http://localhost:7071/api/system-a
system-b: [POST] http://localhost:7071/api/system-b
bus-to-system-b: serviceBusTrigger
system-a-to-bus: timerTrigger
The URL http://localhost:7071/api/system-a
simulates the REST API of System A, and http://localhost:7071/api/system-b
simulates the REST API of System B.
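To push test data through the pipeline, POST a sample customer to the local system-a endpoint. A small standard-library helper for that follows; the record fields match the mapping document above, but the demo's actual payload may differ:

```python
import json
from urllib import request

def build_request(url, payload):
    """Build a JSON POST request for one of the local endpoints."""
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def send(req):
    """Send the request; requires the local host (`func start`) to be running."""
    with request.urlopen(req) as resp:
        return resp.status

customer = {"first_name": "Jane", "last_name": "Doe",
            "address_line2": "Prague/11000"}
req = build_request("http://localhost:7071/api/system-a", customer)
# send(req) returns the HTTP status once the function host is up.
```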
Console log streaming is not currently supported in Linux Consumption apps, so open the log stream in the browser instead:
func azure functionapp logstream $FUNC_APP_NAME --browser
When testing is finished, all resources can be deleted using this command:
az group delete --name $RG