In this documentation series, we will be covering the logical architecture of the ASDK (Azure Stack Development Kit) and its components.
When the ASDK is deployed, the following VMs are brought up on the Hyper-V host.
As of GA, Azure Stack consists of 13 VMs that each have a function in making the ASDK work. All of them are Server Core instances configured with static resources (up to 8 GB of RAM and 4 vCPUs each). In multi-node environments, most of these VMs are made redundant and load balanced using the Software Load Balancer (SLB).
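To keep the roles straight while reading the sections below, the 13 VMs can be summarized as a simple mapping. This is purely a reference sketch; the role descriptions are condensed from this article:

```python
# The 13 ASDK infrastructure VMs and their roles, condensed from this
# article. Each runs Server Core with static resources (up to 8 GB RAM,
# 4 vCPUs).
ASDK_VMS = {
    "AZS-ACS01": "Storage resource provider",
    "AZS-ADFS01": "AD FS authentication/authorization",
    "AZS-SQL01": "SQL service for infrastructure data",
    "AZS-CA01": "Certificate authority",
    "AZS-DC01": "Domain controller, DNS, DHCP",
    "AZS-ERCS01": "Emergency support access (JEA/JIT)",
    "AZS-GWY01": "Site-to-site VPN gateways for tenants",
    "AZS-NC01": "Network controller (SDN control plane)",
    "AZS-SLB01": "Software load balancer",
    "AZS-WASP01": "Tenant portal and ARM",
    "AZS-WAS01": "Admin portal and ARM",
    "AZS-XRP01": "Core resource providers (compute, storage, network)",
    "AZS-BGPNAT01": "BGP-based NAT/VPN edge (ASDK only)",
}

print(len(ASDK_VMS))  # 13
```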
The AZS-ACS01 VM hosts the Azure Stack storage resource provider service. It is one of the important pillars for providing resources within Azure Stack. The underlying technology is Storage Spaces Direct (S2D).
When a tenant creates new storage resources, a storage account is added to a resource group. Once the storage account is in place, it fronts the different storage services on the physical host, such as blob, page, and table storage, backed by a Scale-Out File Server (SOFS), ReFS cluster shared volumes, and virtual disks on S2D.
The AZS-ADFS01 VM is responsible for the authentication and authorization model for Azure Stack. Consider the disconnected scenario, where your deployment doesn't rely on Azure AD: there needs to be a feature that supports authentication and authorization from your in-house Active Directory, and AD FS provides it.
The AZS-SQL01 VM hosts the SQL service for Azure Stack. Many of the services running in Azure Stack need to store data, such as offers, tenant plans, ARM templates, and so on.
The AZS-CA01 VM runs the certificate authority services for deploying and controlling certificates for authentication within Azure Stack. As all communication is secured using certificates, this service is mandatory and needs to work properly. Each certificate is rotated once every 30 days, entirely within the Azure Stack management environment.
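The 30-day cadence can be illustrated with a minimal rotation check. This is a toy sketch of the concept, not Azure Stack's actual certificate management code:

```python
from datetime import datetime, timedelta

# Rotation cadence described above: every certificate is refreshed
# once every 30 days.
ROTATION_INTERVAL = timedelta(days=30)

def needs_rotation(issued_at: datetime, now: datetime) -> bool:
    """Return True when a certificate issued at `issued_at` is due."""
    return now - issued_at >= ROTATION_INTERVAL

issued = datetime(2018, 1, 1)
print(needs_rotation(issued, datetime(2018, 1, 20)))  # False (19 days old)
print(needs_rotation(issued, datetime(2018, 2, 5)))   # True (35 days old)
```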
The AZS-DC01 VM hosts the domain controller role for the Azure Stack environment. This VM is responsible for all internal authentication and authorization for services. As there is no other DC available, it also holds the FSMO roles and the global catalog. This VM additionally runs the DHCP and DNS services for the Azure Stack environment.
The AZS-ERCS01 VM helps in critical situations, in case an issue occurs in Azure Stack itself (known as a "break the cloud" scenario). It allows Microsoft Support to connect to an Azure Stack deployment externally using Just Enough Administration (JEA) and Just in Time Administration (JIT).
The AZS-GWY01 VM is responsible for the site-to-site VPN connections of tenant networks, providing connectivity between them and external networks. It is one of the important VMs for tenant connectivity.
AZS-NC01 is responsible for the network controller services. The network controller is based on the SDN capabilities of Windows Server 2016. It is the central control plane for all networking, provides network fault tolerance, and is the key to bringing your own IP address space (both VXLAN and NVGRE encapsulation are supported, with VXLAN as the preferred one).
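The reason VXLAN enables bring-your-own address space is that each tenant packet is wrapped in an outer UDP datagram carrying a 24-bit VXLAN Network Identifier (VNI), so overlapping tenant IP ranges stay isolated per VNI. A simplified sketch of the 8-byte VXLAN header from RFC 7348 (illustrative only, not the network controller's code):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte with the
    valid-VNI bit set, reserved bits zero, and the 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # First word: flags 0x08 (I-bit) followed by 24 reserved bits.
    # Second word: 24-bit VNI followed by 8 reserved bits.
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # '0800000000138900'
```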
In addition, it is responsible for all VMs that are part of the networking stack of Azure Stack:
The AZS-SLB01 VM is responsible for all load balancing. It handles tenant load balancing, but also provides high availability for the Azure Stack infrastructure services. As expected, providing an SLB to Azure Stack cloud instances means deploying the corresponding ARM template. The underlying technology is hash-based load balancing. By default, a 5-tuple hash is used, consisting of the source IP, source port, destination IP, destination port, and protocol.
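The idea behind 5-tuple hashing is that the hash of a flow's tuple deterministically picks a backend, so packets of the same flow always land on the same instance. A minimal sketch of the technique (illustrative only, not the SLB's actual algorithm):

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a flow's 5-tuple to one backend. The same flow always maps
    to the same backend as long as the pool is unchanged."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

pool = ["infra-a", "infra-b", "infra-c"]
first = pick_backend("10.0.0.5", 50100, "192.168.1.10", 443, "TCP", pool)
again = pick_backend("10.0.0.5", 50100, "192.168.1.10", 443, "TCP", pool)
print(first == again)  # True: flow affinity
```

Note that changing any element of the tuple (for example, a new source port on a new TCP connection) may hash the flow to a different backend, which is exactly how load is spread.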
The AZS-WASP01 VM is responsible for the Azure Stack tenant portal and for running the Azure Resource Manager (ARM) services behind it.
The AZS-WAS01 VM is responsible for running the administrator portal and its Azure Resource Manager instance. ARM is the component responsible for deploying the services provided in your Azure Stack instance. ARM makes sure that resources are deployed the way you design your templates and that they keep running in that state throughout their lifecycle.
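That "deployed the way you design your templates" behavior is declarative: the desired state in the template is compared against the actual state, and drift is corrected. A toy reconciliation loop conveys the idea; this is purely illustrative and not ARM's implementation:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to move `actual` to match `desired`.
    Resources are keyed by name; values are their property dicts."""
    actions = {}
    for name, props in desired.items():
        if name not in actual:
            actions[name] = "create"
        elif actual[name] != props:
            actions[name] = "update"
    for name in actual:
        if name not in desired:
            actions[name] = "delete"  # complete-mode semantics
    return actions

# Hypothetical template and current deployment state:
template = {"vm1": {"size": "A2"}, "vnet1": {"prefix": "10.0.0.0/16"}}
current = {"vm1": {"size": "A1"}, "oldDisk": {"sku": "standard"}}
print(reconcile(template, current))
# {'vm1': 'update', 'vnet1': 'create', 'oldDisk': 'delete'}
```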
AZS-XRP01 is responsible for the core resource providers: compute, storage, and network. It holds the registration of these providers and knows how they interact with each other.
The AZS-BGPNAT01 VM provides NAT and VPN access based on the BGP routing protocol, which is the default for Azure, too. This VM does not exist in multi-node deployments, where it is replaced by the top-of-rack (ToR) switch. As a tenant can deploy a VPN device in its Azure Stack-based cloud and connect it to another on- or off-premises network environment, all such traffic goes through this VM.
As you have seen, these VMs provide the management environment of Azure Stack, and in the ASDK each is available in only one instance. But scalability means deploying more instances of each service, and as the product already includes a built-in software load balancer, it is designed for scale. Another way to scale is to add an Azure Stack integrated system to your environment, providing a place familiar to Azure users.
In the upcoming article, we will talk about the Azure Stack core management services. Happy learning, and we look forward to your feedback here.