May 7, 2018

Machine Learning: Is it really a Black Box?

Machine Learning isn’t the “black box” that many perceive it to be. On complex data sets, Machine Learning combined with a rigorous process and supporting visualizations can yield far more transparency than other methods.

What is a “Black Box”?

Machine Learning models are sometimes characterized as Black Boxes because of their powerful ability to model complex relationships between inputs and outputs without offering a tidy, intuitive description of how exactly they do it. A “Black Box” is “a device, system or object which can be viewed in terms of inputs and outputs without any knowledge of its internal workings” (Source: Wikipedia).

Black Boxes (and Machine Learning models) exist everywhere

We tend to label things “Black Boxes” more because we don’t trust them than because we don’t understand them. Machine Learning models aren’t unique in having an element of “mystery” in how they work; there are all sorts of things we trust around us every day whose inner workings we don’t fully understand. GPS, search engines, car engines, step counters, even the curve-fitting algorithms in Excel are examples where we trust what’s happening inside because we’re able to see and, with experience,...

March 2, 2018

Machine Learning: Finding the signal or fitting the noise?

Before machine learning came along, a typical approach to building a predictive model was to develop the model that best fit the data. But will a model that best fits your data provide a good prediction? Not necessarily. Fortunately, there are machine learning practices that can help us estimate and optimize the predictive performance of models. Before we delve into those, let’s illustrate the potential problem of “overfitting” your data.

Fitting the Trend vs. Overfitting the Data

For a given dataset, we could fit a simple model to the data (e.g., linear regression) and likely have a decent chance of representing the overall trend. We could alternatively apply a very complex model (e.g., a high-degree polynomial) and likely “overfit” the data: rather than representing the trend, we would fit the noise. If we apply the polynomial model to new data, we can expect poor predictions, since it isn’t really modeling the general trend. The example above illustrates the difference between modeling the trend (the red straight line) and overfitting the data (the blue line). The red line has a better chance of predicting values outside of the dataset presented. Due to the powerful...
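The linear-vs-polynomial contrast described in this excerpt can be sketched with a small Python example. The synthetic dataset, noise level, and polynomial degree below are illustrative choices, not from the article; the point is simply that a complex model can score better on the data it was fit to while predicting worse on new data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: an underlying linear trend plus noise (illustrative only)
x_train = np.linspace(0, 10, 15)
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 2.0, x_train.size)

# New data drawn from the same underlying trend
x_test = np.linspace(0, 10, 50)
y_test = 2.0 * x_test + 1.0 + rng.normal(0, 2.0, x_test.size)

def fit_and_score(degree):
    """Fit a polynomial of the given degree to the training data and
    return (training MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred_train = np.polyval(coeffs, x_train)
    pred_test = np.polyval(coeffs, x_test)
    mse = lambda y, p: float(np.mean((y - p) ** 2))
    return mse(y_train, pred_train), mse(y_test, pred_test)

# Simple model (straight line) vs. complex model (degree-9 polynomial)
train_lin, test_lin = fit_and_score(1)
train_poly, test_poly = fit_and_score(9)

print(f"linear:   train MSE {train_lin:.2f}, test MSE {test_lin:.2f}")
print(f"degree 9: train MSE {train_poly:.2f}, test MSE {test_poly:.2f}")
```

The high-degree polynomial will always achieve a lower training error, since it has the flexibility to chase the noise; its test error tells the more honest story about predictive performance.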


Visual Analytics in the New Normal: Past, Present & Future

Over the last decade, the collective impact of changes in production technologies, the shift in commodity prices and a reduced workforce is defining a ‘new normal’ in oil & gas. This presentation provides a visual overview of evolving production and market trends and explores the role data and technology will play as the industry adapts to its changed landscape. “Visual Analytics in the New Normal: Past, Present & Future” was originally presented as part of the geoLOGIC Adapting to the New Normal technical showcase on Nov. 16th, 2017.


How Many Months of Production Do I Need for a Reliable Forecast?

Production forecasts derived from type-well curves are typically based on limited data. Our desire to predict the production performance of recent wells with limited data introduces uncertainty that is difficult to quantify. There is also an industry tendency to rely on early predictors like IP90 as a comparative measure of well performance. This study includes a look-back involving more than 87,000 forecasts (Montney and Viking wells) to see how much data you need for an 80% confidence interval. It also compares how much data you need from both a Volumes (Reserves) and a Value perspective.

Client Stories

Producer uncovers millions in operational improvements while establishing visual analytics culture

One of our favourite things is when our clients are able to get more value out of VERDAZO than they originally expected. We love watching visual analytics cultures take hold, as one project inspires another and users across organizations discover new uses for our product. At one intermediate Canadian producer, a project focused on minimizing downtime yielded a $12 million operational improvement on just a single group of wells. But that was just the start of what they did with VERDAZO. Here’s their story.


How a producer started its analytics journey

A producer in Calgary used our VERDAZO software as part of an initiative to improve their Well Review process. The results went beyond their expectations: they freed their engineers from days of manual Excel analysis in advance of each well review, a time savings worth more than $175,000 annually. That initial project also spawned something the producer couldn’t have initially predicted: a commitment to analytics that eventually spread across the organization. New projects were created and an analytical culture took hold as the producer uncovered new efficiencies and value in unexpected places.

Product Infosheets

Machine Learning

Predict production performance and reservoir properties, and optimize well location and completion design.


Public Data

Explore public data. Identify business opportunities by discovering insights into companies, plays and completion technologies.


Financial Data

Analyze your accounting data. Optimize financial performance by identifying operating cost issues and revenue opportunities.