1960-1974: The digital and wave equation revolution

In the mid-1950s John Sherwood had been creatively studying elastic sound propagation using "Christmas Crackers" fireworks as sources. After finishing his thesis in 1956 he began thinking about his future. He was advised that his interests and those of the oil companies were very similar, and that the latter were pretty clueless as to what they were doing. After a quick review he immediately recognized the tremendous possibilities Geophysics offered and decided that this was his industry of choice. When he arrived at Chevron in 1958 they, in his words, "were using large mechanical machines with 24-channel tape to sum over traces to get the dip first and the wavelet second." This approach was a "Frank Rieber take off." The migration approach was to determine the dip from the seismic records and then, by reversing the problem, figure out where to put the wavelet to construct a migrated seismic section. This idea is a bit different from the way most scientists think about migration today. For one thing, it does not implicitly use Huygens' principle. For another, it's a "beam" method. What one is doing more closely resembles map migration than swing-arm style diffraction stack methodologies. Sherwood wasn't the only person involved with this kind of approach to seismic imaging. Several people at CONOCO in Ponca City, OK, and almost surely at many other companies, were thinking exactly the same way. In fact, in private communications both Bill Harlan and Chuck Sword recall that Chuck's 1987 dissertation discusses some very similar Russian work from 1962 and relates it to stereo tomography. Unfortunately, I am aware of no published work that describes any of these migration approaches in detail. The closest modern analogy is probably the wavepath migration technique (1999; 2000a; 2000b; 2001), or the more complicated Gaussian Beam method of Hill (1990; 2001), but the method is still in use at John's company, Applied Geophysical Services. John thinks that one of his co-workers at Chevron, Alan Trorey, produced one of the first computerized Kirchhoff-based methods in the early 1960s, but again my bet would be that similar efforts were going on in a number of other places, including Schneider's at GSI. Trorey's method was named Automatic Intelligent Migration, or AIM, but was not readily accepted within Chevron. In 1967, John completed the development of "Continuous Automatic Migration" on an IBM accounting machine running in San Francisco. The digital age might have been in its infancy, but there was now no question that it was running full blast.

Sometime in the late 1960s John became the "chaperone" of a young scientist currently on the faculty at Stanford University in Stanford, CA. This young man was none other than Jon Claerbout. In 1970 and 1971, Jon published two seminal papers (1970; 1971), both of which focused on the use of second-order hyperbolic partial differential equations to perform the imaging. The 1971 paper pretty much lays it all out. Upward and downward going waves governed by a one-way equation are coupled together with an imaging condition that produces the image. In essence, one uses a computer to model the shot waveform and downward continue the recorded traces. At each depth or time step the two wavefields are cross-correlated to produce the image at that fixed step. Keep in mind that the computers of the day were not up to the task of implementing this in the shot-profile form we use today, but nevertheless the essential theory was now in place. It is worth noting that even though Jon used one-way equations, nothing expressly forbade the use of two-way equations in the imaging process.
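To make that recipe concrete, here is a minimal sketch of the cross-correlation imaging condition, assuming a constant velocity and a phase-shift (one-way) extrapolator in the frequency-wavenumber domain rather than Jon's finite differences. The function name, array layout, and sign conventions are illustrative assumptions, not Claerbout's implementation: the modeled source wavefield and the recorded traces are both continued downward step by step, and at every depth their zero-lag cross-correlation is summed over frequency to build the image.

```python
import numpy as np

def shot_profile_image(src, rec, dt, dx, v, nz, dz):
    """Toy shot-profile migration with a cross-correlation imaging condition.

    src : (nt, nx) modeled source wavefield at the surface z = 0
          (e.g., a band-limited spike at the shot location).
    rec : (nt, nx) traces recorded at the surface.
    Constant velocity v; phase-shift one-way extrapolation in (omega, kx).
    """
    nt, nx = src.shape
    w = 2 * np.pi * np.fft.rfftfreq(nt, dt)[:, None]   # positive temporal frequencies
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)[None, :]   # horizontal wavenumbers

    S = np.fft.fft(np.fft.rfft(src, axis=0), axis=1)   # source wavefield spectrum
    R = np.fft.fft(np.fft.rfft(rec, axis=0), axis=1)   # receiver wavefield spectrum

    # Vertical wavenumber; evanescent components are simply discarded.
    kz2 = (w / v) ** 2 - kx ** 2
    prop = kz2 > 0
    kz = np.sqrt(np.where(prop, kz2, 0.0))

    # One-way phase-shift extrapolators for one depth step (signs follow
    # NumPy's FFT convention): the source is continued as a downgoing wave,
    # the receiver data as an upgoing wave traced back down to depth.
    down = np.where(prop, np.exp(-1j * kz * dz), 0.0)
    up = np.where(prop, np.exp(+1j * kz * dz), 0.0)

    image = np.zeros((nz, nx))
    for iz in range(nz):
        s_x = np.fft.ifft(S, axis=1)                   # back to (omega, x)
        r_x = np.fft.ifft(R, axis=1)
        # Zero-lag cross-correlation imaging condition at this depth.
        image[iz] = np.sum((np.conj(s_x) * r_x).real, axis=0)
        S *= down                                      # step both wavefields down
        R *= up
    return image
```

The same loop structure carries over when the constant-velocity phase shift is replaced by a finite-difference or velocity-dependent extrapolator; only the downward-continuation operators change.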

For the most part, Jon's approach was based on finite differences. The derivatives in the hyperbolic equations were replaced with numerical approximations, or differences, and the forward and backward propagations were done sequentially (a minimal illustration of this replacement follows this paragraph). There were many variants of the original approach, but for many years Jon and his students stayed dedicated to this methodology. Jon formed the Stanford Exploration Project in 1973, and it is probably safe to say that for many subsequent years SEP was the leader in the development of this technology and Jon's ideas. Jon's SEP has also produced a tremendous number of the top geophysicists in the world. It is difficult, if not impossible, to name one of Jon's students who has not been a strong contributor to the geophysical literature. The SEP, together with its forerunner the GAG group at MIT, were probably the basis for many of the superb consortia to come. Without these two leaders we might not have the Center for Wave Phenomena at the Colorado School of Mines, McMechan's Center for Lithospheric Studies at The University of Texas at Dallas, Jerry Schuster's consortium at The University of Utah, the Allied Geophysical Laboratory at The University of Houston, TRIP at Rice, or maybe even Professor Berkhout's Delphi at Delft in Holland. There are many more of course, but I think I'll let the reader discover the rest.
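As a small illustration of what "replacing derivatives with differences" means in practice, the sketch below approximates a second x-derivative with the standard three-point stencil, the basic building block of finite-difference extrapolators of this kind. The function name and the sin(x) check are merely illustrative, not taken from any of the early codes.

```python
import numpy as np

def d2dx2(p, dx):
    """Central-difference approximation of d^2 p / dx^2 (endpoints left at zero)."""
    d2 = np.zeros_like(p)
    d2[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    return d2

# Quick sanity check against an analytic answer: for p = sin(x), p'' = -sin(x).
x = np.linspace(0.0, 2.0 * np.pi, 201)
p = np.sin(x)
err = np.max(np.abs(d2dx2(p, x[1] - x[0])[1:-1] + p[1:-1]))
print(err)  # small, and it shrinks as the grid is refined
```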

In 1974 I was a faculty member at The University of Tulsa. Because I had done a lot of research into digital signal processing algorithms for anti-submarine warfare, I was asked to teach a course entitled "Digital Methods in Geophysics." At that time the rather flamboyant Jerry Ware was directing geophysical research at CONOCO in Ponca City, OK. He invited me to come over and get an introduction (I think now what he really intended was to educate me) to what Geophysics was all about. In the process of that visit I was introduced to Dr. R. H. Stolt. Bob explained the details of a company report he had written on something called "Migration by Fourier Transform," which later appeared in Geophysics in 1978. I was absolutely amazed. This was something very similar to some work I had done on sonar data while employed as an Engineer/Scientist at TRACOR in Austin, TX. While what I attempted was more focused on the detection of submarines, the basic equations and solutions were very similar. The link between anti-submarine warfare and seismic data processing should not have been surprising, but it was. I must admit I did not understand the geophysical aspects of Stolt's work very well, but his results were certainly convincing. I decided that maybe working on oil industry problems was not so bad after all and might even be fun. I was hooked.

The differences between Jon's approach and Bob's were quite dramatic. Because it relied on something called the Fast Fourier Transform, Bob's was very fast. Even on the computers of the day it could be applied routinely, and it was instrumental in CONOCO's success in the Lobo of South Texas (2004). Before migration, the Lobo is just a mishmash of crossing events. The greatest risk was drilling where the reservoir was faulted out. After migration, we were finally able to actually see where the faults were located. I have no doubt Rieber would have understood and applauded.

While Stolt's Fourier-based method was only theoretically valid for constant velocity, Jon's finite differences were reasonably insensitive to velocity variation. On the other hand, Jon's method could only handle dips up to around 15 degrees, while Bob's method was good up to 90 degrees. Both were one-way methods, and so assumed that only upward traveling waves were recorded at the receivers. Later research, coupled with the ever advancing increase in computer power, would fix all of these problems and result in a tremendous variety of migration algorithm choices.
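For readers who want to see why the constant-velocity assumption is baked in, here is a minimal NumPy sketch of an f-k (Stolt-style) migration of a zero-offset section: transform the data, remap temporal frequency to vertical wavenumber along the exploding-reflector dispersion relation omega = (v/2)*sqrt(kx^2 + kz^2), scale by the Jacobian of that change of variables, and transform back. A single velocity v appears in the mapping, which is exactly where the constant-velocity restriction enters. The function name, grid choices, and linear interpolation are my own illustrative assumptions, not Stolt's implementation.

```python
import numpy as np

def stolt_migrate(data, dt, dx, v):
    """Constant-velocity f-k migration sketch for a zero-offset section.

    data : (nt, nx) array of zero-offset traces p(t, x)
    dt, dx : time and trace spacing; v : constant velocity
    Returns (image, dz) with the image sampled at dz = v*dt/2.
    """
    nt, nx = data.shape
    # Forward transforms: real FFT over time, complex FFT over space.
    P = np.fft.fft(np.fft.rfft(data, axis=0), axis=1)    # (nw, nkx)
    w = 2 * np.pi * np.fft.rfftfreq(nt, dt)               # temporal angular frequency
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)               # horizontal wavenumber

    dz = 0.5 * v * dt                                     # exploding-reflector depth step
    kz = 2 * np.pi * np.fft.rfftfreq(nt, dz)              # vertical wavenumber (>= 0)

    M = np.zeros_like(P)
    for j in range(nx):
        # Dispersion relation of the exploding-reflector model.
        denom = np.sqrt(kz**2 + kx[j]**2)
        w_map = 0.5 * v * denom
        # Linear interpolation of the spectrum onto the mapped frequencies.
        re = np.interp(w_map, w, P[:, j].real, left=0.0, right=0.0)
        im = np.interp(w_map, w, P[:, j].imag, left=0.0, right=0.0)
        # Jacobian of the change of variables omega -> kz.
        jac = 0.5 * v * kz / np.maximum(denom, 1e-30)
        M[:, j] = jac * (re + 1j * im)

    # Inverse transforms back to (z, x).
    image = np.fft.irfft(np.fft.ifft(M, axis=1), n=nt, axis=0)
    return image, dz
```

Run on a synthetic section containing a single diffraction hyperbola, a sketch like this collapses the hyperbola back toward a point, which is the behavior that made the method so convincing on crossing events like those in the Lobo.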

In my opinion, these two papers are significant for four reasons. First, they provided different approaches to the solution of the same problem. Second, they represented two of the first deviations from the diffraction stack approaches of the period. Third, they were both based on the same second-order hyperbolic partial differential equations. Fourth, they made it clear that one could actually digitally image data on the computers of the day. One must remember that imaging digital seismic signals was not broadly understood. During the early part of my tenure from 1984 to 1997 at Amerada Hess Corporation, it was not unusual to hear one of the employees of the predecessor company, Amerada Petroleum, say "you will never be able to record enough bits to make digital as good as analog," or "the old analog data was much better than the digital of today." In fact, employees of Amerada Petroleum were very adamant about never ever going digital. Similar comments can be made about the "wave equation" used by both Claerbout and Stolt. In an almost exact analogy to the doubts associated with the original seismograph in the 1920s, this wave-equation based stuff was apparently a bit difficult to accept. Nevertheless, in addition to the diffraction stack predecessor of the emerging Kirchhoff approaches, there were now three additional competing approaches to computerized seismic imaging. The two mentioned above were, at this point in time, well known in the research world. The one emerging at Chevron under Sherwood and N. R. Hill was not. Maybe a better statement is that the one under Sherwood at Chevron was being forgotten or dropped in favor of more automatic wave-equation based techniques. What's important to remember is that these four methods would form the basis for the technology that was about to appear and become part of the state of the art in the future.

Up until now the typical geophysicist lived in a two-dimensional world. Seismic acquisitions were essentially grids of widely spaced surface lines that were thought of as two-dimensional. Prospect maps were made by contouring posted times from widely spaced 2D lines. This was changing, and actually had to change. Even the old doodlebuggers realized that the world was three-dimensional, and seismic acquisition and imaging had to evolve to make 3D imaging possible. This evolution began roughly at the end of this period and resulted in the accelerated development of both algorithms and computer power. Two-dimensional algorithms had to become 3D algorithms, and computers had to be able to process and image enormous amounts of data.