NGINX has released the latest version of its NGINX Plus development platform, Release 12 (R12), which, according to Chris Lippi, vice president in the NGINX product and technology group, focuses on the increasingly dynamic nature of application development.
"Everything is just getting bigger and more dynamic with our customers," Lippi said. "A lot of what we've built in the R12 [focuses on] programmability and scalability."
NGINX regularly updates NGINX Plus -- typically every four months. However, R12 has brought major feature additions and management capabilities that warrant a closer look. Here, we look at some of the most significant changes to NGINX Plus following the release.
Better cluster management
According to Lippi, NGINX has also made significant amendments to its clustering capabilities and the management of those clusters.
"We've added a number of features to the clustering ability both on how you manage clusters of NGINX and also the back-end resource pools of the underlying applications," Lippi said.
Lippi said one of the major additions lets developers designate a master node that pushes changes appropriately to everything connected to that node. This is particularly useful for customers who need to manage configuration changes across large numbers of individual servers or services.
"You can effectively define a master NGINX node in that cluster, and all the flows will inherit from it," Lippi said. "We have customers that run hundreds, if not thousands, of NGINX servers in their infrastructure. So, it's important to make sure that when you roll out configuration changes, you're ensured that those changes take effect appropriately across the clusters."
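As a sketch of what defining a master node can look like in practice: NGINX Plus ships a configuration-sharing helper, nginx-sync, driven by a small shell-style config file on the designated master. The hostnames and paths below are illustrative placeholders, not values from the article.

```nginx
# /etc/nginx-sync.conf on the designated master node
# (hostnames and file lists are illustrative placeholders)

# Peer nodes that should inherit this node's configuration
NODES="node2.example.com node3.example.com node4.example.com"

# Configuration files and directories to push to every peer
CONFS="/etc/nginx/nginx.conf /etc/nginx/conf.d"

# Files that must stay node-specific, excluded from the push
EXCLUDE="default.conf"
```

In a typical setup, running the nginx-sync.sh script on the master validates the configuration locally, pushes the listed files to each peer over SSH, and reloads NGINX on each one, so changes take effect consistently across the cluster.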
Slow-start health checks
Lippi also highlighted changes in the NGINX Plus application health-check functionality.
For some time, Lippi said, NGINX has allowed developers to code their own application health checks and create a health-check process that focuses on the unique needs of their individual applications.
"You might write a script that understands if an underlying data tier resource is still available or not," Lippi explained. "It's not just if the server is up and running; it is if the underlying application is healthy."
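A custom health check of the kind Lippi describes can be expressed with NGINX Plus's match block, which defines what a "healthy" response looks like. The /health endpoint and JSON body shown here are assumptions for illustration, not part of any specific application.

```nginx
# A health check that goes beyond "is the server up":
# the backend only counts as healthy if its (hypothetical)
# /health endpoint returns 200 and reports the data tier as OK.
match app_healthy {
    status 200;
    header Content-Type = application/json;
    body ~ '"database":\s*"ok"';
}

server {
    location / {
        proxy_pass http://backend;
        health_check uri=/health match=app_healthy interval=5s;
    }
}
```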
In NGINX Plus R12, Lippi said, it is now possible to perform what's called a "slow start" for servers newly added behind NGINX Plus: rather than receiving traffic immediately, a new server is first checked to confirm it will run without error -- whether the risk is a lack of resource availability or some other condition specified in a customized health check.
"You effectively reload the configuration of NGINX, so we now know that that server exists," he explained. "And rather than just start routing traffic to it immediately ... we actually won't add it until the health check is successful. We'll wait until that health check is green, and then start routing traffic to it."
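A minimal sketch of that behavior, assuming a two-server upstream: the slow_start parameter eases a newly added or recovered server in gradually, and marking the health check mandatory holds traffic back until the check passes, which matches the "wait until the health check is green" behavior Lippi describes.

```nginx
# Upstream where newly added servers are eased into rotation.
upstream backend {
    zone backend 64k;    # shared memory zone required for these features
    server app1.example.com:8080 slow_start=30s;
    server app2.example.com:8080 slow_start=30s;
}

server {
    location / {
        proxy_pass http://backend;
        # "mandatory" means a new server receives no traffic
        # until it has passed the health check.
        health_check interval=5s passes=2 fails=3 mandatory;
    }
}
```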
Lippi said this is particularly useful as organizations run ever more server instances that are added and removed rapidly, sometimes running for only a minute at a time. Previously, he said, adding a server that was not ready to serve traffic could result in a flood of 500 errors, "server unavailable" messages or any number of other failures. Now, organizations can avoid those hang-ups by having each new server pass through this automated application health-check process first.
"We have environments that we deploy in where servers are coming and going extremely rapidly -- the server might exist for a minute and then disappear," he said. "So, our ability to make sure that you're not increasing error rates in that kind of an environment is really helped by this feature."
The NGINX Plus R12 update also includes a slew of new metrics associated with those application and server health checks. NGINX Plus provides a status page for monitoring server status; organizations can now get more insight into how those servers are behaving, with the TCP and User Datagram Protocol traffic information NGINX Plus collects used to identify more clearly the specific errors that may be occurring.
"We've added a variety of new metrics on that status page -- things like server response time, what's going on in shared memory zones, specific error codes," Lippi explained. "There are customers who like to get on the box and see these stats directly when they're having problems."
NGINX Plus also supports exporting this information in JSON format to whatever monitoring tool a particular operations or DevOps team might be using, such as New Relic or AppDynamics.
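As a sketch of how that status data is exposed, assuming the R12-era status module: a location with the status directive serves the live statistics as JSON, which external tools can then poll. The port and access rules here are placeholder choices.

```nginx
server {
    listen 8080;

    location /status {
        status;              # live stats served as JSON under /status
        # Restrict access in production; these rules are illustrative.
        allow 127.0.0.1;
        deny all;
    }
}
```

A monitoring agent (or an operator on the box) can then fetch, for example, per-upstream server stats with a request like `curl http://127.0.0.1:8080/status` and feed the resulting JSON into its dashboarding pipeline.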
Lippi said R12 also provides richer ways to manage stale caches. A stale cache entry is a cached object whose allotted lifetime has expired. Traditionally, the application would have to go back to the origin and fetch a new version of the object before it could be served again. With the new stale-caching capabilities in R12, the stale version of the object can be served while the new version is being retrieved, making for a more uninterrupted experience for the end user.
"What these directives allow you to do is effectively manage that stale moment," Lippi said. "Either you're going to go back and have to go pick up that object and have the client side wait for that new version of the object to come back in the cache, or you can use the directives to effectively say, 'Well, if it's stale while you're going back and getting the new version of that object, in the meantime, serve up whatever version of that object that you might have already.'"
Lippi said this is especially important for customers working with video content. It can take anywhere from seconds to minutes to load a very large video file, and at the moment a cached object expires, a fresh copy might not yet be available. This way, the application can opt to serve slightly dated content rather than make the client wait out the delay.
"It's just one of the directives that you can load in terms of how you manage caching," Lippi said. "But it's been important for people ... specifically around people that are running video content through our service."
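The directives Lippi refers to can be sketched as follows; the cache path, zone name and video location are assumptions for illustration. proxy_cache_use_stale with the updating flag serves the expired copy while a fresh one is fetched, and the background-update directive introduced around the R12 timeframe lets that fetch happen without blocking the client at all.

```nginx
proxy_cache_path /var/cache/nginx keys_zone=video_cache:10m max_size=10g;

server {
    location /videos/ {
        proxy_pass http://origin;
        proxy_cache video_cache;
        proxy_cache_valid 200 10m;

        # Serve the stale copy on origin errors, timeouts, or while
        # a fresh copy is being fetched ("updating").
        proxy_cache_use_stale error timeout updating;

        # Refresh expired entries in the background so the client
        # is served immediately instead of waiting on revalidation.
        proxy_cache_background_update on;
    }
}
```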