
Synergy Architectural Document

Synergy Architecture

The Bimser Synergy architecture consists of three Kubernetes Cluster structures:

  • The main Bimser Kubernetes Cluster where all system operations are executed,

  • The Utility Kubernetes Cluster, where operations that also run in the system infrastructure but can consume significant resources (such as indexing, Elasticsearch, Alert Manager, and logging) are carried out,

  • And the User Applications Kubernetes Cluster, which runs all tenant-owned projects in namespaces dedicated to the tenant and completely isolated from other tenant environments.

These Kubernetes environments are managed by Azure Kubernetes Service (AKS). AKS removes the burden of ongoing operations and maintenance by providing upgrades and on-demand resource scaling without taking the application offline.

The application uses Docker, a container technology that allows applications to run as containers in the cloud or on-premise. Containers make applications easier to create, deploy, and run.

The basic components of a Kubernetes Cluster are the Container, Pod, and Node elements.

(Figure: Synergy Architecture)

Containers are structures that are isolated from one another, contain their own software, libraries, and configuration files, and can communicate with each other through well-defined channels. Containers are located inside Pods, which serve as container workspaces; more than one container can run in a Pod. Pods, in turn, run inside Nodes, the machines that run in Kubernetes. A Node can be either a virtual machine (VM) or a physical machine connected to the Cluster. Each Node contains the services necessary for the Pods to run and is managed by the Kubernetes control plane.

When system usage increases, AKS scales the Nodes according to need. A new Node with the same structure as the Node under load is created, and new requests are met by this Node. When the need passes, any Pods running on the excess Nodes are transferred to other Nodes and the excess Nodes are shut down. Likewise, Kubernetes manages scaling Pods up or down according to load. This structure, which can scale on demand, ensures that the system operates flexibly and performantly, and resource management is performed automatically by the system as needed.
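
The Pod-scaling decision itself follows the formula documented for Kubernetes' Horizontal Pod Autoscaler. Below is a minimal sketch of that rule in Java; the utilization numbers are illustrative, not taken from Synergy:

```java
// Sketch of the Horizontal Pod Autoscaler's documented scaling rule:
// desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
public class AutoscaleSketch {
    static int desiredReplicas(int currentReplicas, double currentUtilization, double targetUtilization) {
        return (int) Math.ceil(currentReplicas * (currentUtilization / targetUtilization));
    }

    public static void main(String[] args) {
        // Example: 4 Pods running at 90% CPU against a 60% target -> scale up to 6 Pods
        System.out.println(desiredReplicas(4, 0.90, 0.60)); // prints 6
    }
}
```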

(Figure: Synergy Architecture)

Within the System Cluster, Akka manages the messaging between services. Akka uses the actor model to raise the level of abstraction and to build scalable, flexible, and responsive applications. It handles messaging by routing each incoming request to the Worker that will carry out that request.

At the top of the Akka messaging structure is LightHouse (LH). When each service in the system starts up, it notifies LightHouse that it has joined the structure, and LightHouse reports this information to the other services in the system. The same messaging happens when a service leaves the structure. Thus, thanks to LightHouse, the services in the system are aware of each other.
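
As an illustration of this membership awareness, the sketch below uses Akka's classic Cluster API, in which an actor subscribes to member-up and member-removed events. This is a minimal example assuming standard Akka cluster events; the class and logging are illustrative, not Synergy's actual code:

```java
import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent;
import akka.cluster.ClusterEvent.MemberRemoved;
import akka.cluster.ClusterEvent.MemberUp;

// Subscribes to cluster membership changes: how services learn about
// each other joining or leaving via the seed node (LightHouse).
public class MembershipListener extends AbstractActor {
    private final Cluster cluster = Cluster.get(getContext().getSystem());

    @Override
    public void preStart() {
        cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(),
            MemberUp.class, MemberRemoved.class);
    }

    @Override
    public void postStop() {
        cluster.unsubscribe(getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(MemberUp.class, up ->
                System.out.println("Service joined: " + up.member().address()))
            .match(MemberRemoved.class, removed ->
                System.out.println("Service left: " + removed.member().address()))
            .build();
    }
}
```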

When a request comes from the client, it is received by the WebAPI of the relevant service and sent to that service's Router. The Router directs the work to whichever Actor (Worker) is appropriate at that moment. A Worker is a process or thread that can do work; it is responsible for performing the work assigned to it.
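
A minimal sketch of this Router-to-Worker dispatch, using Akka's standard round-robin pool router; the actor and message names are illustrative assumptions, not Synergy's actual implementation:

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.routing.RoundRobinPool;

public class RouterExample {
    // A Worker actor: receives a work item and performs it
    static class Worker extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                .match(String.class, work -> {
                    // perform the work and reply to the sender
                    getSender().tell("done: " + work, getSelf());
                })
                .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("synergy");
        // Router that distributes incoming requests across a pool of 5 Workers
        ActorRef router = system.actorOf(
            new RoundRobinPool(5).props(Props.create(Worker.class)),
            "workerRouter");
        router.tell("index-document", ActorRef.noSender());
    }
}
```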

Akka also has a scaling mechanism of its own to meet increased load when needed. Normally, Actors have 5 threads; when the load increases, this number can be raised up to 10, and the replication happens only within the same container. Akka's scaling is limited to this: Akka cannot create new Nodes or new Pods. If the Actor threads are fully utilized and there is still a need for expansion, Kubernetes takes over the work of creating a new Worker.
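
Akka's classic API expresses this bounded growth with a pool resizer. Below is a minimal sketch with a lower bound of 5 and an upper bound of 10, matching the limits described above; the names are illustrative, not Synergy's actual configuration:

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.routing.DefaultResizer;
import akka.routing.RoundRobinPool;

public class ResizableRouterExample {
    static class Worker extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                .match(String.class, work -> getSender().tell("done: " + work, getSelf()))
                .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("synergy");
        // Resizer grows the pool from 5 workers up to at most 10 under load,
        // mirroring the 5-to-10 scaling limit described above.
        DefaultResizer resizer = new DefaultResizer(5, 10);
        ActorRef router = system.actorOf(
            new RoundRobinPool(5).withResizer(resizer).props(Props.create(Worker.class)),
            "resizableWorkerPool");
    }
}
```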

Let's examine how the three basic Clusters in the Synergy architecture communicate with each other and with the Client, and the technologies used in these steps.

(Figure: Synergy Architecture)

The request from the client is first received by Azure Traffic Manager. Traffic Manager is a DNS-based traffic load balancer that provides high availability and responsiveness while optimally distributing traffic to services in different Azure regions. It reduces application response time by routing traffic to the endpoint with the lowest network latency for the client. It also provides high availability for critical applications by monitoring endpoints and performing automatic failover when an endpoint is unavailable. It allows planned maintenance to be carried out on the application without any interruption to the system, ensuring that traffic is routed to alternate endpoints while maintenance is in progress.

For example, suppose the system receives a major update. So that the entire system is not affected by this update, a new Cluster identical to the existing Cluster structure is created. A certain percentage of incoming requests (around 10%) is routed to the new Cluster and the effects of the update are observed. Thus, if the update causes a problem, only the routed portion of the requests is affected rather than all of them, and a controlled transition is achieved. If no problems are observed with the new Cluster, a larger percentage of incoming requests is transferred to it. In this way, once it is determined that the update causes no problems, all client requests are transferred to the new Cluster in a controlled manner. When all requests have been transferred, the old Cluster is shut down and the system continues to operate on the updated Cluster. Traffic Manager provides the routing structure needed for the system to receive major updates without being stopped.
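
Conceptually, the weighted split works like the sketch below. Traffic Manager performs this at the DNS level; the hostnames and the 10% weight here are purely illustrative:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of weighted traffic splitting: a fraction of requests goes to
// the new Cluster, the rest to the old one.
public class WeightedRouter {
    private final double newClusterWeight; // fraction of traffic routed to the new Cluster

    public WeightedRouter(double newClusterWeight) { this.newClusterWeight = newClusterWeight; }

    public String route() {
        return ThreadLocalRandom.current().nextDouble() < newClusterWeight
            ? "https://new-cluster.example.com"   // illustrative hostname
            : "https://old-cluster.example.com";  // illustrative hostname
    }

    public static void main(String[] args) {
        WeightedRouter router = new WeightedRouter(0.10); // ~10% to the new Cluster
        for (int i = 0; i < 5; i++) System.out.println(router.route());
    }
}
```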

The next step after Traffic Manager is the Azure Content Delivery Network (CDN). To minimize latency, the CDN stores cached content on edge servers in point-of-presence (POP) locations close to end users. It manages the distribution of user requests and the delivery of content from the edge servers so that less traffic is sent to the origin server.

When a file request comes from the client, DNS routes the request to the POP geographically closest to the user and with the best performance. If the file does not exist in the cache of the edge servers at that POP, the POP requests the file from the origin server. The edge server at the POP caches the file, and the file remains cached there until the time to live (TTL) specified by the HTTP headers expires. If the origin server has not specified a TTL, the default TTL is seven days. As long as the TTL for the file has not expired, the edge server returns the file directly from its cache, resulting in a faster and more responsive user experience.
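
The caching behavior described above can be sketched as follows; this is an illustrative model of edge-cache TTL handling, not Azure CDN's actual implementation:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Sketch of edge-cache behavior: serve from cache while the TTL is valid,
// otherwise fetch the file from the origin server and re-cache it.
public class EdgeCache {
    private static final Duration DEFAULT_TTL = Duration.ofDays(7); // default when origin sets no TTL

    private record Entry(String content, Instant expiresAt) {}
    private final Map<String, Entry> cache = new HashMap<>();

    public String get(String url, Duration originTtl) {
        Entry e = cache.get(url);
        if (e != null && Instant.now().isBefore(e.expiresAt)) {
            return e.content;                      // cache hit: return directly from the edge
        }
        String content = fetchFromOrigin(url);     // cache miss or expired: go to origin
        Duration ttl = (originTtl != null) ? originTtl : DEFAULT_TTL;
        cache.put(url, new Entry(content, Instant.now().plus(ttl)));
        return content;
    }

    private String fetchFromOrigin(String url) {
        return "<content of " + url + ">";         // placeholder for the real origin request
    }
}
```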

The next step after the CDN is Azure Application Gateway, a web traffic load balancer that manages traffic within the application. Based on the incoming URL, it routes traffic to the specific configured set of servers associated with that URL.
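
A minimal sketch of this URL-based routing rule; the paths and back-end pool names are illustrative assumptions:

```java
// Sketch of URL path-based routing, the behavior Application Gateway provides.
public class PathRouter {
    static String backendPoolFor(String path) {
        if (path.startsWith("/api/"))    return "api-server-pool";
        if (path.startsWith("/images/")) return "image-server-pool";
        return "default-server-pool";
    }

    public static void main(String[] args) {
        System.out.println(backendPoolFor("/api/documents/42")); // -> api-server-pool
    }
}
```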

The request that passes through these steps is received by the Load Balancer tier in the Kubernetes Cluster. Load balancing evenly distributes inbound network traffic across back-end resources or a server farm. When a request arrives at the host Cluster, the LoadBalancer first receives it and decides where to route it within the Cluster, based on what the request needs and the load within the Cluster. There are multiple Nodes within a Cluster, and the LoadBalancer is the structure that decides which Node an incoming request will be directed to.
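
As an illustration, one way such a routing decision could look is a least-loaded selection. This strategy is an assumption made for the sketch; the LoadBalancer's actual policy may differ:

```java
import java.util.Comparator;
import java.util.List;

// Sketch of a load-aware routing decision: pick the Node with the fewest
// active requests (a least-connections strategy, assumed for illustration).
public class NodePicker {
    record Node(String name, int activeRequests) {}

    static Node pick(List<Node> nodes) {
        return nodes.stream()
            .min(Comparator.comparingInt(Node::activeRequests))
            .orElseThrow();
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("node-a", 12), new Node("node-b", 4), new Node("node-c", 9));
        System.out.println(pick(nodes).name()); // -> node-b
    }
}
```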

(Figure: Synergy Architecture)

gRPC technology is used for the bridge connection that enables communication between the isolated Clusters. The User Applications Cluster and the Bimser Cluster communicate with each other over a secure gRPC channel. Each tenant communicates with the main Cluster separately from the other tenants, using the token allocated to it.
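
A minimal sketch of what such a token-authenticated gRPC channel can look like in Java; the hostname, header name, and credentials class are illustrative assumptions, not Synergy's actual implementation:

```java
import io.grpc.CallCredentials;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Metadata;
import java.util.concurrent.Executor;

// CallCredentials implementation that attaches the tenant's token to every call.
class TenantTokenCredentials extends CallCredentials {
    private static final Metadata.Key<String> AUTH_KEY =
        Metadata.Key.of("authorization", Metadata.ASCII_STRING_MARSHALLER);
    private final String token;

    TenantTokenCredentials(String token) { this.token = token; }

    @Override
    public void applyRequestMetadata(RequestInfo info, Executor executor, MetadataApplier applier) {
        executor.execute(() -> {
            Metadata headers = new Metadata();
            headers.put(AUTH_KEY, "Bearer " + token); // tenant-specific token (assumed header format)
            applier.apply(headers);
        });
    }

    @Override
    public void thisUsesUnstableApi() {}
}

public class BridgeChannel {
    public static void main(String[] args) {
        // TLS-secured channel from the tenant namespace to the main Cluster
        // (hostname is illustrative).
        ManagedChannel channel = ManagedChannelBuilder
            .forAddress("bimser-main-cluster.example.com", 443)
            .useTransportSecurity()
            .build();
        // A generated stub would attach the credentials like this:
        // stub.withCallCredentials(new TenantTokenCredentials(tenantToken));
        channel.shutdown();
    }
}
```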