Shared Load Balancer

The Jelastic Platform provides you with a Shared Load Balancer (resolver). It is an NGINX proxy server that sits between the client side (a browser, for example) and your application deployed to the Jelastic Cloud.

The Shared LB processes all incoming requests sent to an environment domain name ({user_domain}.{hoster_domain}) whose entry point (load balancer, application server, or even database) does not have a public IP address attached.

shared load balancer illustration

A common Shared LB processes the requests sent to all of the applications located on the same hardware node. To protect against DDoS attacks, the Shared Load Balancer is limited to 50 simultaneous connections per request source address.
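For illustration, such a per-source-address connection cap is typically expressed in NGINX with the limit_conn directives. The snippet below is only a generic sketch of that mechanism - the zone name, zone size, and backend address are made-up placeholders, not the actual Jelastic SLB configuration:

    # Generic sketch of a per-source-IP connection limit in NGINX;
    # the zone name/size and the backend address are placeholders.
    http {
        limit_conn_zone $binary_remote_addr zone=per_ip:10m;

        server {
            listen 80;
            # reject a client once it holds more than 50 simultaneous connections
            limit_conn per_ip 50;

            location / {
                proxy_pass http://127.0.0.1:8080;   # placeholder backend
            }
        }
    }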

To increase the availability of the system, Jelastic uses several synchronized load balancers, placed on different nodes, that handle requests simultaneously. All of them work with a single data storage, which makes them fully interchangeable if an issue occurs on one of the instances.

shared load balancer failover

As a result, several entry points to users’ environments can be used at the same time, so the incoming load is distributed effectively.

Note: 

We recommend using the Shared Resolver for your dev and test environments. For production environments intended to handle high traffic, it is more appropriate to receive and process requests via your own public IP. A public IP also lets you apply a number of additional options that can make your application more secure (e.g. with Custom SSL) and responsive (through attaching a Custom Domain).

Public IP vs Shared Load Balancer

Backend Health Check with Shared Load Balancer

The Jelastic Shared Load Balancer constantly performs server health checks, utilizing the NGINX upstream check module with the following settings:

check interval=15000 rise=2 fall=3 timeout=2000 default_down=false;

With these settings, all containers are considered “up” from startup, and the system verifies their availability every 15 seconds. If no response is received from a container within 2 seconds, the check is regarded as failed. Three consecutive failures mark a node as “down”, while two successful checks in a row mark it as “up” again.
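For reference, this check directive comes from the third-party NGINX upstream check module and is placed inside an upstream block. A minimal sketch of such a block is shown below; the upstream name and server addresses are placeholders, not real container IPs:

    # Sketch: placement of the upstream check module directive;
    # the upstream name and server addresses are placeholders.
    upstream backend_layer {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;

        # probe every 15 s with a 2 s timeout; 3 consecutive failures mark a node
        # "down", 2 consecutive successes mark it "up"; nodes start in the "up" state
        check interval=15000 rise=2 fall=3 timeout=2000 default_down=false;
    }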

As for traffic distribution within a separate environment, a dedicated load balancer node is automatically added to its topology when the number of application server instances is set to more than one (i.e. the layer is scaled out horizontally). Jelastic PaaS provides four load balancer stacks to choose from, each with its own health check configuration specifics:

  • NGINX - runs a simple TCP check (i.e. verifies that the required server port is available) right before routing a user request to a node; if the check fails, the next node within the layer is tried (see the sketch after this list)
  • HAProxy - performs regular TCP checks (every 2 seconds by default), storing the results in a table of backend states that it keeps constantly up to date
  • Apache Balancer - no health check procedure is implemented by default
  • Varnish - all backends are assigned the following probe parameters in the balancer configs, so that health checks are performed once per minute with a 30-second timeout:
.probe = { .url = "/"; .timeout = 30s; .interval = 60s; .window = 5; .threshold = 2; }
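As a minimal sketch of the NGINX behavior described in the first item above, stock NGINX retries a request on another upstream server when the selected one cannot be reached. The addresses, ports, and thresholds below are placeholders, and this is not the exact template shipped with the Jelastic NGINX balancer:

    # Generic sketch: per-request failover to the next node in plain NGINX;
    # addresses, ports and thresholds are placeholders.
    upstream app_servers {
        server 10.0.0.11:8080 max_fails=1 fail_timeout=10s;
        server 10.0.0.12:8080 max_fails=1 fail_timeout=10s;
    }

    server {
        listen 80;

        location / {
            # if the chosen node fails the connection attempt, try the next one
            proxy_next_upstream error timeout;
            proxy_pass http://app_servers;
        }
    }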

The default health check settings can be manually adjusted to your needs (through either the Jelastic File Manager GUI or via SSH) according to the appropriate load balancer stack specification - refer to the official NGINX, HAProxy, Apache Balancer, or Varnish documentation for details on the available settings.

Deny Access via Shared Load Balancer

Jelastic PaaS provides a predefined option to disable external access to environment nodes via the SLB. With a single click it prohibits access to containers over their default domain names (without adding a public IP or adjusting the firewall). The option is available as the Access via SLB toggle in the topology wizard.

access via SLB

Note: When adding a Public IP, the platform automatically disables Access via SLB for the same layer. This configuration is recommended as it provides the highest security level for your application. However, if necessary, you can re-enable Access via SLB to use both options simultaneously.

The option is enabled for each layer by default, which ensures the following behavior:

  • nodes are accessible from the Shared Load Balancer via environment domain names using the default ports (80, 8080, 8686, 8443, 4848, 4949, 7979)
  • the Open in Browser button opens the appropriate service (e.g. database admin panel)
  • nodes’ links are present in the emails (if needed)

If you manually disable the Access via SLB feature, the behavior changes as follows:

  • nodes are inaccessible from the Shared Load Balancer - the layer is isolated from the SLB
  • the pages accessible via the Open in Browser button in the dashboard return the 403 Forbidden error instead of the intended service
  • nodes’ links are excluded from the emails
  • access via SSH and through endpoints is not affected

For better visibility, layers with SLB access disabled are marked with an appropriate label in the dashboard.

no SLB access label

Connecting to such nodes via the default URL returns the following error page instead of the expected service:

403 forbidden access
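Conceptually, the isolation means the resolver answers with 403 Forbidden for the layer’s default domain instead of proxying the request to a backend. The snippet below is a purely illustrative NGINX sketch of such a response - the server name is a made-up placeholder, and this is not the actual SLB configuration:

    # Purely illustrative: returning 403 Forbidden for a host whose
    # Access via SLB is disabled; server_name is a placeholder.
    server {
        listen 80;
        server_name env-1234567.example-hoster.com;
        return 403;
    }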

Below are some of the most common use cases for the feature:

  • close public access via SLB to nodes that are intended for internal access only (e.g. databases)
  • forbid access via SLB to nodes with public IP address attached and custom domain configured
  • configure topology that allows connection via environment load balancer but prohibits access via direct URL to containers

In general, you can keep the Access via SLB option enabled for your development and testing environments. However, we recommend disabling the feature for applications in production and using a public IP with a custom domain instead.

What’s next?