Data Conditioning for AI / ML

From a data scientist's perspective, everything OpenGeoSolutions does can be considered "data conditioning" in preparation for Artificial Intelligence / Machine Learning (AI/ML) processes.

We have a lot of experience processing seismic data to "expose the geology within" and provide end-user visualization through our Navigators.

A single seismic volume is not enough.

Spectral decompositions blended across offset/angle, time/depth, and frequency; Spectral Discontinuity™ to highlight edges; spectral inversion to get at layering and interfaces; blueing and colored inversion to adjust amplitude spectra; curvature to highlight shape and structure; elastic inversion to reveal geomechanical properties; and raw/denoised versions. All these volumes convey different aspects of the seismic data and provide meaning to seismic interpreters.
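To make the first of these attributes concrete, here is a minimal sketch (not OpenGeoSolutions' production code) of spectral decomposition on a single synthetic trace: a short-time Fourier transform splits the trace into frequency bands, and each band becomes its own "volume" when applied trace by trace across a survey. The sample rate, window length, and synthetic events are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 500.0                     # assumed sample rate in Hz (2 ms sampling)
t = np.arange(0, 2.0, 1 / fs)  # two seconds of trace

# Synthetic trace: a 30 Hz event in the first second, a 60 Hz event in the second.
trace = (np.sin(2 * np.pi * 30 * t) * (t < 1.0)
         + np.sin(2 * np.pi * 60 * t) * (t >= 1.0))

# Short-time Fourier transform: rows of `power` are frequency bands over time.
freqs, times, Z = stft(trace, fs=fs, nperseg=128)
power = np.abs(Z)

# Each row of `power` is one frequency band for this trace; energy near
# 30 Hz concentrates in the first second, energy near 60 Hz in the second.
```

Applied to every trace in a survey, each frequency band yields a full seismic volume, which is how one input cube fans out into many spectral volumes.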

In a typical project, we generate hundreds of seismic volumes, and millions of images for visualization. Essentially, we provide a uniformly sampled decomposition and visualization of all the seismic data dimensions.

In every project delivery, when we bring up the Spectral Navigator™, the interpreter soon grabs the mouse and starts flying the data. They will never look at all the images, but the ability to traverse all the dimensions of the data is a powerful aid to the interpretive mind, leading to valuable new insights.

AI/ML processes show great promise as valuable assistants for skilled interpreters. AI/ML networks work longer hours and will dutifully review all the data provided to them. Just as a skilled interpreter derives value and insight from having ALL the data, AI/ML networks will require ALL the data, not just a single volume.
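One common way to hand a network ALL the data rather than a single volume is to stack co-registered attribute volumes as input channels, the same way an image network consumes RGB channels. The sketch below is a hypothetical illustration; the attribute names and survey dimensions are assumptions, not from an actual project.

```python
import numpy as np

# Hypothetical co-registered attribute volumes for a small survey,
# each with shape (inline, xline, time). Names are illustrative.
shape = (10, 10, 50)
rng = np.random.default_rng(0)
volumes = {
    "raw": rng.standard_normal(shape),
    "spectral_30hz": rng.standard_normal(shape),
    "discontinuity": rng.standard_normal(shape),
    "curvature": rng.standard_normal(shape),
}

# Stack into a single channels-first array, the usual layout for a
# convolutional network: (channels, inline, xline, time).
features = np.stack(list(volumes.values()), axis=0)
```

Because every volume is uniformly sampled on the same grid, stacking is trivial, which is one practical payoff of conditioning the data before any AI/ML work begins.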