
Distributed computing models require tooling and people skills

The dream of distributed computing has clearly become a reality for many -- a reality that has forced development teams to think hard about how they manage their resources.

Whether the resource is a distributed computing model or the people who maintain it, the message for developers is clear: Stretch it too far and performance falters. Avoiding that requires having the right tooling in place and weighing advancements in serverless computing. It also requires that software teams think hard about the notions they hold about their human resources.

Such was the highlight of the Tuesday morning keynotes at the O'Reilly Software Architecture Conference in New York City. Three keynote speakers hit upon three major angles of enterprise software management today: the caveats of new development methods, the importance of managing people and the promised benefits of serverless computing.

Here is a look at the three keynote speeches and the messages each speaker wanted to send attendees home with.

More services, more problems?

Author and software architect Mark Richards, one of O'Reilly's in-house experts, kicked off the keynote speeches with a talk on microservices -- both the pretty and the ugly sides of shifting an enterprise architecture to a more distributed model.

Companies are putting a lot of stock in the competitive advantage that distributed computing can provide, he said. But there's one thing about a monolithic architecture that seems to get lost as organizations pursue more distributed computing models: The monolith was stable, it was simple and it was reliable.

Enter the world of distributed computing. As companies move toward a more modern and distributed model, agility, deployment speed, testability and scalability all go up. But while all these things are great, there is a caveat -- performance, simplicity and reliability go down. Take distributed computing models to the extreme end of microservices, and you are looking at extremely agile systems that may leave you missing your old reliability.

Of course, that doesn't mean microservices are bad. But it does mean, he pointed out, that companies need to be aware of the trade-off, and that the tooling and architecture design choices they make will be key to reaping the benefits of a distributed model without completely sacrificing the performance and reliability of their systems.

Despite these warnings, developers like Hugo Pelletier of SSENSE, a clothing retailer, said that they have not necessarily experienced these kinds of performance, reliability or stability issues, even though they are very heavily invested in microservices.

"For us, performance is not an issue because we can scale every service independently," he said, saying they use Kubernetes to manage load balancing of services. Pelletier admits that while there does seem to be a general perception that leveraging microservices will negatively impact performance, he believes that this is more related to the way that services are coupled rather than a problem with microservices in itself.

[Photo: Author and software architect Mark Richards talks about the effects increasingly distributed computing can have on performance, stability and other factors.]

Computers are easy; people are hard

The next keynote was given by Bridget Kromhout, principal technologist at Pivotal. She said that when she first meets with Pivotal customers interested in making the move to a more distributed computing model, she notices that there always seems to be an exclusive focus on tooling. This, she said, is a mistake.

"The tooling is not enough," she said, adding that only by properly configuring your architecture and paying attention to strong, fundamental development and management practices will distributed computing models work.

This led her to make note of Conway's Law, which dictates that "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."

Kromhout is hesitant to take this purely at face value, but she does believe that focusing on what your development and operations teams are capable of -- or, perhaps, not capable of -- may well determine your success with things like microservices. To this point, she spent time debunking the "fallacies of distributed people," a set of eight ill-conceived notions about developers and operations teams modeled on the eight "fallacies of distributed computing," a set of assertions made more than two decades ago by computing experts at Sun Microsystems.

In her presentation, Kromhout debunked these fallacies one by one, making the point that they all tempt people to believe there is some hidden magic in distributed computing that makes it impervious to problems -- problems that are often mirrored in the way organizations think about the people on their software teams.

So what were some of the truths behind the fallacies? Here are a few that were particularly interesting:

  • The network is reliable: You can never assume your network will have 100% uptime; there are power failures, severed connections and so on (see the defensive-coding sketch after this list). Likewise, you can't assume your people will always be there. People get sick, they go on vacation, they quit ... the list goes on and on.
  • Bandwidth is infinite: You can't push unlimited information through your network; you can only do so much before you hit its limit. Likewise, people burn out if you try to get them to do too much. No one is a super developer.
  • There is one administrator: Unless you work only with small, isolated LANs, you are hardly ever going to find a situation where there is only one system administrator. Administrators are assigned based on expertise, and everyone is responsible for their own part. Much in that same way, people have their own agendas and opinions, and everyone feels ownership, at least in some part, over their projects.
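The first fallacy, in particular, has a direct coding consequence. As a minimal illustration -- standard-library Python only, with a hypothetical internal URL -- the sketch below bounds a network call with a timeout and retries transient failures with exponential backoff, rather than assuming the call will always succeed:

```python
# Sketch: design for an unreliable network -- bound every call with a
# timeout and retry transient failures with exponential backoff.
# The URL is a hypothetical placeholder; only the standard library is used.
import time
import urllib.error
import urllib.request

def fetch_with_retries(url, attempts=4, timeout=2.0):
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...

data = fetch_with_retries("http://inventory.internal/api/stock")
```

The human analog Kromhout drew has the same shape: plan for absences and handoffs instead of assuming a person, like a network link, is always available.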

Conference attendee Leah Cunningham, lead agile coach at SPS Commerce, a provider of cloud-based supply chain management software, said that while the technology and the people are equally important, it's critical to recognize that each person is unique -- both in the benefits they provide and in the challenges they may face.

"We all come with our own unique failure modes and our own bugs, so to speak ... so many different nuances," Cunningham said. "We're very unpredictable."

She added that people, especially in the software industry, often look for clear direction in doing their jobs -- a software pattern to follow, say, or a particular technology to employ. But while those "magic formulas" may work for one group of people, applying the same formula to a different group can prove a challenge, she said, simply because of the differences between people.

"You may have success in some organizations, and then go elsewhere with that same recipe and struggle," she said. "And it's because people ... they're unpredictable."

Cunningham added that this truth only serves to highlight the importance of clear communication within an organization and among your peers. Making sure that everyone is on the same page, she said, is critical to success.

The power of serverless computing

The keynotes ended with a wholehearted endorsement of serverless computing by Symphonia co-founder Mike Roberts. Serverless computing, which Roberts called the "next evolution of the cloud," embodies two things: backend as a service (BaaS) and functions as a service (FaaS).

BaaS differs from traditional software as a service (SaaS) in that it outsources application components rather than organizational processes. Two options Roberts highlighted were Firebase, a Google offering popular with mobile developers, and Auth0, a BaaS aimed more at enterprise software management.
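What "outsourcing an application component" looks like in practice: instead of building and operating its own user database and login flow, the application calls out to a hosted identity service. The sketch below is hypothetical -- the endpoint URL, field names and response shape are stand-ins, not any real provider's API -- but it shows the shape of the delegation:

```python
# Sketch: delegating authentication to a hosted BaaS rather than running
# your own user store. The endpoint, client_id and response shape are
# hypothetical stand-ins, not a real provider's API.
import requests

def login(username, password):
    resp = requests.post(
        "https://example-tenant.baas-provider.example/oauth/token",
        json={
            "client_id": "my-app",   # hypothetical application identifier
            "username": username,
            "password": password,
        },
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # token your backend verifies later

token = login("ada@example.com", "s3cret")
```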

The other side of serverless computing, FaaS, frees teams from most operations work: application code is deployed within host instances, such as containers or VMs, that the provider manages. FaaS functions are invoked by a number of triggering events, including message bus activity, network file system changes, timed events and HTTP requests. AWS Lambda is a well-known FaaS choice.
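To make the FaaS model concrete, here is a minimal function in AWS Lambda's Python handler form, assuming an HTTP trigger in the API Gateway proxy style; the greeting logic is a throwaway placeholder:

```python
# Sketch: a minimal FaaS function in AWS Lambda's Python handler form.
# There is no server host or process to manage -- the platform invokes
# handler() once per triggering event (here, an HTTP request) and scales
# the function automatically.
import json

def handler(event, context):
    # For an HTTP trigger, `event` carries the request's query string,
    # headers and body.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```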

So why go serverless? For one, Roberts said, you don't have server hosts and server processes to manage. The service providers will analyze your data loads and scale applications appropriately, also eliminating your need to manage load balancing. You don't pay for what you don't use, and vendors will take care of your patching and other general maintenance needs. The caveat, he said, is that you have no guarantee of disaster recovery, a notion worth considering since certain outages have made the news lately.

Weighing in again, Pelletier said that while his team does take advantage of AWS Lambda functions, they do not leverage serverless computing in a big way.

"I think when I get back to work, we'll talk about it to see if instead of adding pods or instances of servers, we can use those services to handle daily calculations," he said.

However, he does not believe they are at a point where they can go completely serverless, particularly when it comes to outsourcing code that may be tightly linked to particular business processes and, potentially, intellectual property.

"The code is based on the business needs, so we have to transfer everything into a serverless space, and I'm not sure the company would be totally willing to do that, for security purposes,"

Pelletier also said he would be concerned about manageability if and when features were to start running on a varied collection of serverless offerings. However, he remains open to the idea and plans to discuss it with his co-workers when he gets back to the office.

"Maybe we can just do a proof of concept and see if we can go that path," he said. "It's a good question."

