Service Fabric is a terrific platform for orchestrating your Microservices. It provides many features like Service Discovery, Fault Tolerance, and Reverse Proxy out of the box, making it extremely easy to manage your Microservices.
Unlike other orchestrators like Kubernetes, it has a very rich developer tool-set, and if your services are developed using .NET Core or other Azure services, a lot of things work out of the box.
Microsoft provides ARM templates to easily create a secure cluster, complete with Virtual Networks, Subnets, Network Security Groups, Load Balancers, etc. If it's a standalone cluster, isolated from everything else, it's ridiculously easy to get up and running quickly.
However, most projects are not green field, meaning you don't have the luxury of creating a new Virtual Network for every cluster you provision. That's not feasible, and probably not desirable either. Your organization might already have other resources deployed to existing Virtual Networks, and your new Microservices might have to be deployed to the same VNET to communicate securely with other services or databases.
Things can get complicated if you have legacy web applications or APIs that you want to containerize and deploy to the same cluster, to take advantage of Service Fabric's capabilities.
Service Fabric is a very extensible platform and provides great flexibility in configuring your cluster using Azure Resource Manager (ARM). ARM is the deployment and management service for Azure resources. It provides a declarative way to create, update, and delete resources in your Azure subscription via ARM templates written in JSON.
Microsoft provides some templates covering a few scenarios in their GitHub repo. We can take the one that most closely matches our requirements and then customize it. None of these templates covered our specific needs, like deploying to an existing VNET with support for containers. What makes it complicated is that with containers, you'll get multiple network interfaces on your Virtual Machine, and you'll have to modify the ARM template to make sure the correct NIC is used for cluster communication.
One of the most important things to understand here is that when you're deploying resources to a VNET, you also need to choose a subnet that your resources will be deployed to. So, in the ARM template, we'll have to identify the final subnet the resources will be deployed to, construct a reference to it, and provide it to the Virtual Machine Scale Set.
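As a sketch, the subnet reference can be built with the `resourceId()` template function and then fed into the Scale Set's IP configuration. The parameter names (`existingVNetRGName`, `existingVNetName`, `existingSubnetName`) are illustrative, not the exact ones from the template:

```json
{
  "variables": {
    "subnetRef": "[resourceId(parameters('existingVNetRGName'), 'Microsoft.Network/virtualNetworks/subnets', parameters('existingVNetName'), parameters('existingSubnetName'))]"
  }
}
```

Inside the Scale Set's `networkProfile`, each `ipConfigurations` entry then points at that subnet with `"subnet": { "id": "[variables('subnetRef')]" }`.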
Add these three parameters to the parameter file.
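The original snippet isn't shown here, but a typical trio of parameters for targeting an existing VNET looks something like the following (names and values are illustrative):

```json
{
  "parameters": {
    "existingVNetRGName": { "value": "shared-network-rg" },
    "existingVNetName": { "value": "corp-vnet" },
    "existingSubnetName": { "value": "sf-subnet" }
  }
}
```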
Since you're leveraging an existing VNET, you don't want your ARM template to create another VNET. Let's comment out the following lines.
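In the stock template, the block to comment out is the `Microsoft.Network/virtualNetworks` entry in the `resources` array. It looks roughly like this (ARM tolerates `//` comments when you deploy via Azure PowerShell or the CLI); the exact variable names will depend on the template you started from:

```json
// {
//   "apiVersion": "[variables('vNetApiVersion')]",
//   "type": "Microsoft.Network/virtualNetworks",
//   "name": "[parameters('virtualNetworkName')]",
//   "location": "[parameters('computeLocation')]",
//   "properties": {
//     "addressSpace": { "addressPrefixes": [ "[parameters('addressPrefix')]" ] },
//     "subnets": [ ... ]
//   }
// }
```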
You also don't want your Virtual Machine Scale Set to depend on the creation of any Virtual Network, since the VNET already exists. In the Microsoft.Compute/virtualMachineScaleSets section, comment out the following line.
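The line in question sits in the Scale Set's `dependsOn` array; with it commented out, the Scale Set no longer waits on a VNET that the template would otherwise have created (the exact expression varies by template):

```json
"dependsOn": [
  // "[concat('Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'))]"
]
```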
Those changes should be enough to get your SF cluster deployed to an existing VNET. However, if you'd like your VMs to support Windows Containers, you'll have to make a few more changes.
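One of those changes is pointing the Scale Set at a Windows Server 2016 image that ships with the containers feature enabled. The `2016-Datacenter-with-Containers` SKU from the `WindowsServer` offer does that:

```json
"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2016-Datacenter-with-Containers",
    "version": "latest"
  }
}
```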
By adding nicPrefixOverride, you're making sure the correct NIC is used for cluster communication.
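The override goes into the settings of the Service Fabric node extension on the Scale Set and takes the subnet's address prefix. A rough sketch, with illustrative variable and parameter names:

```json
{
  "name": "[concat('ServiceFabricNodeVmExt_', variables('vmNodeType0Name'))]",
  "properties": {
    "publisher": "Microsoft.Azure.ServiceFabric",
    "type": "ServiceFabricNode",
    "settings": {
      "clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
      "nodeTypeRef": "[variables('vmNodeType0Name')]",
      // With multiple NICs/IP configurations on the VM, this tells the
      // Service Fabric runtime which subnet to use for cluster traffic.
      "nicPrefixOverride": "[parameters('subnet0Prefix')]"
    }
  }
}
```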
That's it. When you deploy this to your Azure subscription, you'll get a secure cluster that's deployed to your existing VNET and runs Windows Server 2016 Datacenter with support for Windows Containers. You'll be able to deploy green field Microservices developed using Service Fabric's SDK, or you can take your legacy applications, containerize them, and deploy them to Windows Containers on the same cluster. It's truly the best of both worlds.
The full code can be found here.
—Preetham Reddy, Cloud Solutions Architect at Tech Fabric
Tech Fabric specializes in building web, mobile, and cloud-based applications using the Microsoft stack (C#, .NET Core, Xamarin, Azure, SQL Server, etc.). If you need help taking your on-premises application to the cloud or converting your monolithic applications to a microservices-based architecture, we'd be glad to help you out. You can reach out to our sales team at [email protected]