There is a tidal wave of technological transformation driven by recent innovation in artificial intelligence, fueled by widely available, massive computational resources and a rising flood of big datasets. Computers are predicting our buying habits, detecting obstacles on the roads, winning the most complicated games against grandmasters, and inferring our emotions from our images and prose. The opportunities are endless.
Petroleum engineering focuses on modeling the subsurface oil and gas system and forecasting production rates to optimize value for our energy-dependent society. Accomplishing this feat requires a combination of disparate datasets, physics-based models, and expert knowledge spanning an incredible breadth of types, scales, and accuracies. Paradoxically, while subsurface project teams are often drowning in data, direct sampling of the inaccessible reservoir typically covers only about one-trillionth of the reservoir volume. While much of the physics of reservoir formation, fluid migration, and production mechanisms is known, sparse sampling and reservoir heterogeneity confound prediction.
Given this context, the question remains: what is the role of big data analytics and artificial intelligence in petroleum engineering? Is it a case of little data + simple model = big data? Should the role of statistical modeling be expanded, displacing physical modeling and determinism? What are the opportunities to expand objective workflows, given the large number of subjective, experience-based decisions? How do we collect data to maximize profitability today and support big data analytics in the future? How do we clean and amalgamate our data to provide reliable results?