
How can we break through the hardware load-balancer brick wall?

Listen to this podcast

As applications and services become more and more distributed, those who depend on traditional load-balancing technology may hit the 'load-balancing brick wall.'

As organizations transition toward more distributed computing models and rapid, service-based application development, those responsible for monitoring and maintaining these services may find themselves running into serious load-balancing issues.

The hardware load-balancer brick wall

Today's application environments are completely different from those of the 1990s, when enterprises started making widespread use of hardware load balancers, explained Abhi Joglekar, lead software engineer at Avi Networks, a Santa Clara, Calif., provider of software for delivering enterprise applications in data centers and cloud environments. Radical changes in the application development and delivery landscape, with enterprises now deploying applications as tens of thousands of services spread across huge numbers of clusters, have made the hardware load balancer a hindrance in many cases.

"There is a basic problem," Joglekar said. "How do you deploy [a high-availability] pair of load balancers across these thousands of nodes and services in a meaningful way? That's what we refer to as the fundamental technology problem, or the hardware load-balancer brick wall."


Joglekar, who wrote a blog post on the subject, said he calls this a "brick wall" because the world was, as he put it, "happily, merrily going around with these old architectures." However, sometime around 2008, big-name players like Microsoft and Google, which were beginning to run container-based applications at massive scale, started to run into issues with load balancing. In the last decade, he said, they've completely replaced their hardware load balancers with distributed application delivery controllers that provide load balancing at scale.
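Neither company's internal systems are detailed here, but the basic shape of that replacement is easy to sketch: instead of a single high-availability pair of appliances in front of everything, a lightweight software proxy runs on or alongside each node, and the fleet of proxies scales out with the services themselves. The following sketch is purely illustrative, not Avi's, Google's or Microsoft's implementation; the backend addresses are hypothetical and would, in practice, come from a service registry rather than a hard-coded list.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical backend instances of one service; in a real deployment
	// these would be discovered dynamically, not hard-coded.
	backends := []string{
		"http://10.0.0.11:8080",
		"http://10.0.0.12:8080",
		"http://10.0.0.13:8080",
	}

	var targets []*url.URL
	for _, b := range backends {
		u, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		targets = append(targets, u)
	}

	var counter uint64
	proxy := &httputil.ReverseProxy{
		// Pick the next backend in round-robin order for each request.
		Director: func(req *http.Request) {
			target := targets[atomic.AddUint64(&counter, 1)%uint64(len(targets))]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	// Each proxy is an ordinary process, cheap enough to run per node or
	// per service, so the load-balancing tier scales out with the services
	// instead of being a fixed hardware pair.
	log.Fatal(http.ListenAndServe(":8000", proxy))
}
```

Because each proxy is just another process, adding capacity means starting more of them alongside new service instances rather than buying and racking a bigger appliance.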

Starting in 2010, Joglekar said, he noticed a similar trend among modern enterprises as they continued to move toward more service-based architectural styles like microservices.

"Applications are now being deployed weekly [and] monthly and have all these requirements from the infrastructure," Joglekar explained. "This is the modern world, and, in this world, we think that hardware load balancers simply don't cut it."

What's so wrong with hardware load balancers?

Guru Chahal, vice president of products at Avi Networks, said one problem is that these hardware load balancers, which can take up large amounts of space in a data center, often use only a fraction of their actual load capacity -- a fact Avi determined by surveying its enterprise customers about load-balancer utilization.


"We wanted to see what utilization level do organizations run their load balancers at, [expecting maybe] 60%, 70%, 80%," Chahal recounted. "It was 10%."

At the time, this was a big surprise to Chahal and his team. Going back and examining the reasons, however, made it clear why this was occurring: The proliferation of devices and services that demand intense scaling means load-balancer capacity may need to grow dramatically at any given time. Rather than add or replace hardware later, organizations prefer to provision five to 10 times the capacity they actually need up front, so they can avoid heavy changes down the line.
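The arithmetic behind that 10% figure follows directly from the overprovisioning Chahal describes. A minimal sketch, using made-up traffic numbers rather than anything from Avi's survey:

```go
package main

import "fmt"

func main() {
	// Hypothetical numbers for illustration only.
	steadyStateRPS := 50000.0 // requests per second the site actually serves today
	headroomFactor := 10.0    // buy 10x capacity up front to avoid a rip-and-replace later

	provisionedRPS := steadyStateRPS * headroomFactor
	utilization := steadyStateRPS / provisionedRPS * 100

	fmt.Printf("Provisioned capacity: %.0f RPS\n", provisionedRPS)
	fmt.Printf("Day-one utilization: %.0f%%\n", utilization) // ~10%, in line with the survey figure
}
```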

"If I'm a large bank and I need to deploy maybe 50 load balancers into an environment, do I really want to go in and change those again in six months or one year?" Chahal said. "These things are like refrigerators or mainframes in the data center. … I never want to do that again. Scaling up [and] ripping stuff is just so expensive and so complex."

In this podcast, Joglekar and Chahal explain more about the causes and consequences of hitting the load-balancing brick wall. They also explain how organizations can repurpose their existing investments in load-balancing technology, as well as what they can do to avoid load-balancing issues both today and in the future, including making sure services are easily located and following what Chahal calls a principle of application "copackaging." Listen to learn more.

Next Steps

Tips for optimizing load balancing in multicloud environments

What is elastic load balancing?

Can active-active clusters use load balancers?
