Traffic Node Configuration for Atlas Clusters
When you add a traffic node to your clustered environment, you define the node's name and its canonical hostname. You can also associate specific networks with the traffic node, which predetermines a client's traffic node selection based on its network prefix. This type of configuration is most relevant for administrators deploying a clustered Bomgar site in a WAN environment.
One last configuration option available when initially defining a traffic node is the time zone offset of the appliance. This must be set if you plan to use the time zone offset method (discussed in more detail below) to decide which traffic node clients connect to.
After defining traffic nodes in your environment, you can choose the process clients use to determine which traffic node to connect to. Bomgar administrators have the following options:
Time Zone Offset: The time zone offset method detects the time zone setting of the client machine and uses that setting to match the nearest available traffic node. The offset is derived from the client machine's time zone setting relative to Coordinated Universal Time (UTC). This method is good for testing and can be used in production; however, a DNS-based solution is preferable in a production environment.

IP Anycast: The IP Anycast method uses a shared IP address among all traffic nodes and relies on the network infrastructure to return the "closest" traffic node to the client. If your organization already has a global content delivery network in place, this may be the preferable option for you. IP Anycast is a very robust solution but can be more complicated to implement and maintain. If you do have this type of infrastructure in place, however, it is the best method for customer and representative client traffic node selection.

A Record: The A record method instructs clients to attempt to connect to a specified (shared) hostname and relies on the DNS configuration to return the appropriate traffic node IP address. This method can be used in an environment where you have complete control of the DNS resources that all of your customers will use; there are also third-party DNS providers that can supply this service for you. With this method, you could define an A record for trafficnodepicker.example.com. For your customers in the US who use DNSserver01, the A record resolves to 18.104.22.168; for your customers in Europe who use DNSserver02, the same A record resolves to 22.214.171.124.

Random: The random method randomly assigns the traffic node a client connects to. It is most useful when you have taken the time to accurately define the network prefixes for each respective traffic node: if a client's network does not match any of the predefined networks on any participating traffic node, the master node assigns the client a random traffic node. This method is simple and inexpensive, and it lets you rely on the network prefixes defined for each traffic node. However, if your clustered environment spans multiple regions or the globe and your network prefixes are left undefined, this method can yield less than desirable results.
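As a sketch of the time zone offset method, a client's UTC offset can be compared against the offset configured for each traffic node. The node names and offsets below are illustrative assumptions, not values from the Bomgar configuration:

```python
from datetime import datetime, timezone

# Hypothetical traffic nodes and their configured UTC offsets in hours
# (illustrative values only).
TRAFFIC_NODES = {
    "node-us-east": -5,
    "node-europe": 1,
    "node-apac": 9,
}

def nearest_node_by_offset(client_offset_hours):
    """Return the node whose configured offset is closest to the client's."""
    return min(TRAFFIC_NODES,
               key=lambda n: abs(TRAFFIC_NODES[n] - client_offset_hours))

# Derive the local machine's offset from UTC, as a client might.
local_offset = (datetime.now(timezone.utc).astimezone()
                .utcoffset().total_seconds() / 3600)
chosen = nearest_node_by_offset(local_offset)
```

Real clients would perform the equivalent comparison against the time zone offsets administrators set when defining each traffic node.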
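The A record method can be pictured as a geo-aware DNS layer where each regional resolver returns a different address for the same shared hostname. The sketch below simulates that behavior with a lookup table; the resolver names follow the example above, while the hostname mapping and RFC 5737 documentation addresses are assumptions for illustration:

```python
# Simulated geo-aware DNS: each regional resolver returns a different
# A record for the same shared hostname (illustrative values only).
GEO_DNS = {
    "DNSserver01": {"trafficnodepicker.example.com": "198.51.100.10"},  # US
    "DNSserver02": {"trafficnodepicker.example.com": "203.0.113.20"},   # Europe
}

def resolve_a_record(resolver, hostname):
    """Return the IP the given regional resolver serves for the hostname."""
    return GEO_DNS[resolver][hostname]
```

In production, this split is handled entirely by your DNS infrastructure or a third-party geo-DNS provider; the client simply resolves the shared hostname.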
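The interplay between defined network prefixes and the random fallback can be sketched as follows. The node names and prefixes are hypothetical; the logic mirrors the behavior described above, where an unmatched client receives a random node from the master node:

```python
import ipaddress
import random

# Hypothetical prefix-to-node assignments (illustrative values only).
NODE_PREFIXES = {
    "node-us-east": [ipaddress.ip_network("10.1.0.0/16")],
    "node-europe": [ipaddress.ip_network("10.2.0.0/16")],
}

def select_node(client_ip):
    """Match the client against defined prefixes; otherwise pick randomly."""
    ip = ipaddress.ip_address(client_ip)
    for node, prefixes in NODE_PREFIXES.items():
        if any(ip in prefix for prefix in prefixes):
            return node
    # No prefix matched: fall back to a random traffic node,
    # as the master node would.
    return random.choice(list(NODE_PREFIXES))
```

The more completely the prefixes are defined, the less often the random fallback is exercised.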
SRV Record: The SRV record method is very similar to the A record method in that traffic node selection relies on the underlying DNS infrastructure to determine which node to connect to. The main difference between the two methods is that SRV records can assign a weight and a priority to each host entry, giving you a way to provide load balancing and failover at the DNS level. Note that this method requires control over the DNS infrastructure your clients will use. If you are deploying in a WAN environment, the use of SRV records is probably already a common practice that you can leverage to add an extra layer of redundancy and load balancing to your clustered Bomgar environment.