Predictive Modeling: A Simple, Practical Introductory Guide

Whether you call it predictive modeling, machine learning, or artificial intelligence, it consumes considerable attention in the technology press—and for good reason. Effective predictive modeling extends business capabilities while improving scale and freeing up staff. Although the press pays the most attention to customer-facing applications, there are even more opportunities in the IT back office for predictive modeling to make a difference. Regardless, successful predictive modeling pairs a sound understanding of modeling fundamentals with the ability to recognize the right situations. In this post, we'll explore both of these key elements of predictive modeling so you'll be ready to use it within your IT group.


What Is Predictive Modeling?

Every day, we make predictions about things. For example, we predict what route to work will take the least amount of time, or we predict the weather for our kid’s soccer game this afternoon. Predictive modeling is no more or less elaborate than that—it’s just practiced on a larger and somewhat more formal level.

A model sounds like a grand construction, and sometimes it is, so let's first agree on what a model is. A model is a system that answers a question. That's it. I don't even mean a computer system. The model could be a flow chart, an Excel spreadsheet, or the latest deep learning creation. The model captures the essence of the system it reflects while being simpler to interpret and use. Predictive models are usually mathematical algorithms because machine learning is the most common way to build them, but it's not the only way.

Benefits

The intent of predictive modeling is to build a way for a business to reliably, accurately, and profitably answer questions from data. Here are four key benefits of predictive modeling.

Reduce Cost

Processes in a business combine people and technology to turn an input into an output. A process requires investment from the business: staff to execute it, managers to monitor it, and executives to resource it. If a process can be handled by a predictive model at the desired level of accuracy, it allows the business to repurpose the staff and resources while keeping (or extending) that capability.

Maximize Scale and Leverage

A business may have an incredibly profitable process, but if it’s too costly to execute at scale, its value is limited. For example, suppose a retailer knew that a customer buying items X and Y together tended to also buy item Z. If the retailer can’t get that information to the customer early enough, or if the recommendation relies heavily on staff judgment, then the value of that process degrades severely. By comparison, recommendations retain their value when they can be given to customers at precisely the right time and across the maximum number of intended customers.
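
To make the X-and-Y-implies-Z idea concrete, here's a minimal market-basket sketch. It assumes the mlxtend library, and the order data is invented for illustration; any association-rule mining approach would work the same way.

```python
# Find "customers who buy X and Y also buy Z" style rules from
# one-hot encoded transaction data (mlxtend is an assumption here).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy transactions: rows are orders, columns are items (True = purchased).
orders = pd.DataFrame(
    [
        {"X": True,  "Y": True,  "Z": True},
        {"X": True,  "Y": True,  "Z": True},
        {"X": True,  "Y": False, "Z": False},
        {"X": False, "Y": True,  "Z": False},
    ]
)

itemsets = apriori(orders, min_support=0.25, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "confidence"]])
```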

Improve Response Time

A security analyst may be an expert at interpreting incident logs, but a predictive model can limit exposure simply by responding faster than any person could.

Quantify Accuracy

Accuracy in models is poorly understood. Raw accuracy is rarely, by itself, the right measure of a model's performance because it's not nuanced enough. Suppose that for a given event with two outcomes, A and B, A occurs 99% of the time. But when B occurs, it represents a significant gain or loss for the business. We can achieve 99% accuracy by always predicting A, but how useful is that model? Answer: not useful at all. Instead, we can use our predictive model to have a better conversation.

A model tuned to tolerate more false positives (predicting B when it's actually A) offers better value if it also increases the true positive rate, provided we know how to respond to it. For example, let's predict a confidence score instead of a binary value. Then we can loop in a human evaluator whenever the score falls in a middle range. Rest assured, humans are also making these errors if they're part of the process. Now, though, our predictive model allows a richer conversation about the situation and how to extract value from it.
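
Here's a minimal sketch of that middle-range routing. The thresholds are invented and would be tuned against the actual cost of each kind of error in your business.

```python
# Act on a confidence score instead of a hard A/B call.
# The 0.20 and 0.80 thresholds are assumptions, not recommendations.

def route(p_b: float, low: float = 0.20, high: float = 0.80) -> str:
    """Route a prediction based on the model's confidence that B occurs."""
    if p_b >= high:
        return "B"              # confident enough to act automatically
    if p_b <= low:
        return "A"              # confidently the common case
    return "human review"       # middle range: loop in a human evaluator

for score in (0.03, 0.45, 0.91):
    print(score, "->", route(score))
```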

Modeling for DevOps

Much of the time, when businesses get excited about predictive modeling and its cousins, it's about customer-facing or revenue-generating uses. Interestingly, that's the worst place for a business new to predictive modeling to start because it's the least prepared for it. The preferred on-ramp to predictive modeling is automating back-office processes, and IT is particularly fertile ground. We'll look at potential predictive modeling applications through a DevOps lens, in both development and operations.

Predictive Modeling for Development

Application source code opens up interesting possibilities for predictive modeling, but one thing makes it a challenge: it's all text. Classic machine learning relies on tabular data (rows and columns), and any text present there tends to be statuses or categories. Converting that text to numeric columns is necessary, but the options are well understood, as shown below. Text like source code, though, is a larger challenge.
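
For example, one well-understood option is one-hot encoding a category column. This sketch assumes pandas, and the status column is invented for illustration.

```python
# Turn a status/category column into numeric indicator columns.
import pandas as pd

builds = pd.DataFrame({"status": ["passed", "failed", "flaky", "passed"]})
numeric = pd.get_dummies(builds, columns=["status"])
print(numeric)
# One boolean indicator column per category:
# status_failed, status_flaky, status_passed
```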

The Source Code Opportunity

Consider the things you could do with source code if you could model it:

  • Was any code that was submitted for peer feedback today poorly designed?
  • Did we commit any bugs today?
  • Do any new libraries we’re using have security vulnerabilities?
  • How well does the code align with our coding standards?

You’re already answering these questions if you work in software development, but at what cost and effort? If you’re like most firms, developers submit code to be reviewed. Even with a checklist, the review is likely tribal and subject to the individual reviewer. Also, security is constantly evolving, so most developers have a hard time staying current on secure coding. Lastly, how often are dependent libraries reviewed? Seldom, if ever.

A Predictive CI Process

Now imagine that as part of your CI process, a series of automated checks score your code for design and security. If the scores aren't high enough, the build is rejected and you're forced to resolve the issues. This is no different from automated testing—we've shifted these code evaluation processes left and automated them. But how can we use source code to achieve any of this?
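
Here's a minimal sketch of what such a gate could look like. The scoring models are hypothetical stand-ins, and the thresholds are assumptions you'd tune for your own pipeline.

```python
# A CI quality gate: fail the build when model scores fall below cutoffs.
import sys

# Hypothetical cutoffs; tune these for your own risk tolerance.
THRESHOLDS = {"design": 0.70, "security": 0.90}

def gate(scores: dict[str, float]) -> bool:
    """Return True if every score clears its threshold."""
    return all(scores[name] >= cutoff for name, cutoff in THRESHOLDS.items())

if __name__ == "__main__":
    # In practice these would come from the hypothetical scoring models.
    scores = {"design": 0.82, "security": 0.75}
    if not gate(scores):
        print("Build rejected: quality scores below threshold", scores)
        sys.exit(1)  # nonzero exit fails the CI step, like a failed test
```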

Making Predictions

Making predictions from text is difficult—there’s no way around it. We as humans are great at language, but so far that hasn’t translated to the machine world. That said, new ground is broken every day, and there are techniques to make it approachable. The representation of text for predictive modeling is always the first hurdle.

Word Embedding

How close are the concepts of "cat" and "dog"? What about "dog" and "puppy"? Shouldn't the relation of dog, puppy, and cat let us derive kitten? Word embedding allows you to do this type of computation with language. An embedding gives you a vector of numbers for every word. With those vectors, you can use a variety of techniques to build a model, including clustering or distance-based models and deep learning.
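
Here's a minimal sketch of that puppy-to-kitten arithmetic. It assumes the gensim library and its downloadable pretrained GloVe vectors.

```python
# puppy is to dog as kitten is to cat: compute cat + puppy - dog
# and ask for the nearest words (gensim's downloader is an assumption).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads on first use
print(vectors.most_similar(positive=["cat", "puppy"], negative=["dog"], topn=3))
```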

The challenge with word embeddings is how to create them. You can find existing embeddings out there, but the key question is: what set of documents were they built from? GloVe is a popular embedding built from a massive web crawl. That's great if you're building a model from text that's similar to its source material—not so good for source code. But if an embedding was built from a massive code repository (in the appropriate language), you'd be in business! If your company is big enough to have such a repository, you can build your own proprietary embedding. If not, consider using publicly available sources or find one you can license for your use.
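
Here's a minimal sketch of training such an embedding yourself. It assumes gensim, and the naive token streams stand in for what a real language-aware lexer would produce from your repository.

```python
# Train a word embedding over source code tokens with Word2Vec.
from gensim.models import Word2Vec

# Each "sentence" is the token stream of one source file (invented here).
token_streams = [
    ["def", "connect", "(", "host", ",", "port", ")", ":"],
    ["conn", "=", "connect", "(", "host", ",", "port", ")"],
]

model = Word2Vec(sentences=token_streams, vector_size=100, window=5, min_count=1)
vector = model.wv["connect"]  # the learned embedding vector for one token
```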

Deep Learning

Deep learning is the specific practice of using neural networks with many layers. It adapts well to text-based learning because several deep learning architectures (such as convolutional and recurrent neural networks) operate on sequences. Also, you're still free to use word embeddings or any other text learning procedure as you see fit. Deep learning is a big subject on its own, but for unstructured data like text, it offers significant advantages over standard machine learning.
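
As a sketch, a minimal sequence model might look like the following. It assumes TensorFlow/Keras, and the vocabulary size and layer dimensions are placeholders.

```python
# An embedding layer feeding a recurrent layer for token sequences.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),  # token -> vector
    tf.keras.layers.LSTM(32),                                    # reads the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),              # e.g. risky / not risky
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```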

Suggestions

NLP, as the name states outright, is about natural language. Much less scholarship exists on source code languages. First, see if the market has anything to offer and avoid building it yourself. If nothing exists and you're going to build something in-house, I'd recommend partnering with an NLP firm; they'll have the raw NLP expertise to supplement your in-house domain knowledge. Lastly, if that's not an option, consider hiring an NLP consultant to work with you directly. Just know that there could be substantial investment required if you go that route.

Predictive Modeling for Operations

Operations data presents a different challenge. It tends to be highly variable: human-managed support tickets, monitoring data, and logs. The sheer volume of data and the lack of labels add to the difficulty.

The Operations Opportunity

Predictive modeling can save a lot of time and effort in operations if the data can be modeled effectively. Some of the avenues it affords include the following:

  • Which security incidents should we explore further?
  • What application will go down next?
  • What server will go down next?
  • How much traffic should we expect when we launch our new product?
  • How satisfied are our customers (internal or external) with our support services?

Making Predictions

All the text-based predictive modeling covered above applies in operations as well. Support tickets especially can be a rich vein to mine, and word embeddings and other NLP techniques apply more directly because ticket text is natural language. Three new challenges arise: anomalies, the time-based nature of the data, and the diversity of sources.

Anomalies

Operations problems tend to be anomaly problems. For example, 99.9% of the time, log entries are innocuous and innocent. Once in a while, though, a meaningful entry appears. Also, much of the log data isn't labeled, so a model doesn't always have solid target definitions. Anomaly detection focuses on identifying those special cases, often without needing labels at all.
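
Here's a minimal sketch of that idea using unsupervised anomaly detection. It assumes scikit-learn, and the log-derived features are invented for illustration.

```python
# Flag anomalous log windows without any labels.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: log windows. Columns: e.g. event count, error rate, bytes out.
features = np.random.RandomState(0).normal(size=(1000, 3))
features[-1] = [9.0, 9.0, 9.0]  # one injected outlier

detector = IsolationForest(contamination=0.001, random_state=0).fit(features)
flags = detector.predict(features)       # -1 marks the anomalous windows
print(np.where(flags == -1)[0])          # indexes worth a human's attention
```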

Time-Series Modeling

A lot of operations data consists of time-stamped logs from servers, applications, monitoring, access controls, and devices. Time-series modeling has a long history in finance, so while it may feel new to machine learning practitioners, it's not new in general, and many of those techniques apply directly. Deep learning is also a great tool here, for the same reasons it works for text.
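
As a sketch, classic seasonal forecasting applies directly to something like hourly request counts. This assumes statsmodels, and the series itself is invented for illustration.

```python
# Forecast hourly traffic with Holt-Winters exponential smoothing.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

requests = pd.Series(
    [120, 90, 80, 240, 310, 280] * 20,  # stand-in for real hourly counts
    index=pd.date_range("2023-01-01", periods=120, freq="h"),
)

fit = ExponentialSmoothing(requests, seasonal="add", seasonal_periods=6).fit()
print(fit.forecast(steps=12))  # expected traffic for the next 12 hours
```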

Source Diversity

Server and application logs combined with other observability measures yield a rich data set for modeling. However, preparing the data is the challenge, because it takes effort to correlate data points across sources. This correlation is vital for effective modeling, and it's time-consuming, so be sure to build a repeatable process or system that handles it automatically.
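
For example, here's a minimal sketch of relating events from one source to the nearest sample from another. It assumes pandas, and the columns and values are invented.

```python
# Align application events with the nearest metrics sample per host.
import pandas as pd

app_logs = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01 00:00:03", "2023-01-01 00:00:09"]),
    "host": ["web-1", "web-1"],
    "event": ["error", "timeout"],
})
metrics = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01 00:00:00", "2023-01-01 00:00:10"]),
    "host": ["web-1", "web-1"],
    "cpu": [0.35, 0.92],
})

# Match each event to the closest metrics reading within 5 seconds.
joined = pd.merge_asof(
    app_logs.sort_values("timestamp"), metrics.sort_values("timestamp"),
    on="timestamp", by="host", tolerance=pd.Timedelta("5s"), direction="nearest",
)
print(joined)
```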

Suggestions

Focus your predictive modeling efforts in operations on cases where response time makes a difference or the volume of data is high. Because predictive models can decide more quickly than a human can, use them to prioritize human workflows.

Wrap Up

In this article, we learned about predictive modeling for software development and operations. We reviewed the basics of predictive modeling and explored specific questions that apply in each area. Predictive modeling in IT lowers staff costs and refocuses staff time onto higher-value areas.