1989-2004: Clusters and Commodity-Based Computing

With regard to computers, one of the most significant developments during this period was the emergence of Seismic Unix at the Colorado School of Mines. The system was designed by Einar Kjartansson for UNIX operating systems and later popularized at Mines by Shuki Ronen. Jack Cohen and John Stockwell turned it into what is now called Seismic Unix (Cohen and Stockwell, 1997) at the Center for Wave Phenomena (CWP) at Mines. It is now the most downloaded package for processing seismic data in the world, and it runs on almost all varieties of UNIX, including LINUX and the current Mac OS X. All geophysicists owe a debt of gratitude to Einar, Shuki, Jack, and John for their tremendous foresight and effort.

Without the relentless progression toward smaller and more powerful computers, prestack migration as we know it today might never have been possible. The theory would have been there (some say it was always there; we just exploited it), but applying it would have been as difficult as it was for Rieber. The "supercomputers" of the day were fast, but even the fastest Cray T90 was not fast enough to handle the ever-increasing data volumes being generated by modern marine acquisition systems. These machines were also so expensive that many companies either lacked the economic resources to acquire one or were simply unwilling to part with the necessary funds.

In about 1989, two events made me believe that seismic migration could not only become an almost solely prestack process but might in fact become the processing norm. The first was the recognition that one could connect several relatively inexpensive workstations together to form a powerful cluster computer, and the second was Yonghe Sun's development of an extremely efficient beam-stack approach (Sun et al., 2000) to 3D Kirchhoff-style migration. Recognition of the power of the cluster computer actually arose from running a Seismic Unix style processing stream on a dual-CPU Apollo DN 10000 workstation. When this machine arrived at Amerada Hess it had only one CPU. After the second CPU was plugged in, the processing stream ran twice as fast as it had with one. It did not take long to realize that passing data from machine to machine was not only feasible but might result in a processing environment on which 3D prestack depth migration could be made to work both efficiently and cost effectively. Sun's beam-stack migration was 4 to 6 times faster than any Kirchhoff we could have written, and as a result the combination of cluster computers and a fast algorithm made reasonably sized prestack depth migration possible. By 1994 the cluster-algorithm combination could process 72 square miles (8 GOM blocks) of input data into 36 square miles (4 GOM blocks) of output in 8 days on a 40-CPU IBM SP2. In late 1998 the installation of a LINUX-based cluster at Amerada Hess Corporation foretold the move away from IBM-style SP2s to cheaper and more efficient PC-based systems. The appearance of Advanced Data Solutions' LINUX-based Rebel cluster system running prestack Kirchhoff depth migration on the floor of the 1999 SEG convention confirmed that even small companies could enter the 3D depth-imaging arena.
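The architectural insight is worth sketching. Kirchhoff prestack migration parallelizes almost perfectly over the input data: each shot gather can be migrated independently into a partial image on its own node, and the partial images are simply summed, so "passing data from machine to machine" amounts to scattering gathers out and gathering images back. The toy Python script below illustrates that scatter-migrate-sum pattern, with local worker processes standing in for cluster nodes; the constant-velocity 2D geometry, the numbers, and all function names are illustrative assumptions, not the Hess production code or Sun's beam-stack algorithm.

```python
# Toy sketch of the data decomposition used on early migration clusters:
# each worker migrates its own shot gathers into a partial image, and the
# partial images are summed.  Constant-velocity 2D Kirchhoff; illustrative only.
import numpy as np
from multiprocessing import Pool

V = 2000.0          # constant velocity (m/s), assumed for this toy example
DT = 0.004          # time sample interval (s)
NT = 500            # samples per trace
NX, NZ = 200, 120   # image grid dimensions
DX = DZ = 10.0      # grid spacing (m)

def make_shot(src_x, n_recv=48, drecv=25.0, diff_x=1000.0, diff_z=800.0):
    """Synthesize one shot gather: spikes recorded from a single point diffractor."""
    recv_x = src_x + drecv * np.arange(n_recv)
    data = np.zeros((n_recv, NT))
    for ir, rx in enumerate(recv_x):
        t = (np.hypot(diff_x - src_x, diff_z) + np.hypot(diff_x - rx, diff_z)) / V
        it = int(round(t / DT))
        if it < NT:
            data[ir, it] = 1.0
    return src_x, recv_x, data

def migrate_shot(shot):
    """Kirchhoff-migrate one gather into a partial image -- the per-node job."""
    src_x, recv_x, data = shot
    image = np.zeros((NZ, NX))
    xs = DX * np.arange(NX)
    for ir, rx in enumerate(recv_x):
        trace = data[ir]
        for iz in range(NZ):
            z = DZ * iz
            # two-way traveltime: source -> image point -> receiver
            t = (np.hypot(xs - src_x, z) + np.hypot(xs - rx, z)) / V
            it = np.rint(t / DT).astype(int)
            ok = it < NT
            image[iz, ok] += trace[it[ok]]
    return image

if __name__ == "__main__":
    # Each shot is an independent piece of work: map it across the "cluster"
    # (here, local worker processes), then reduce the partial images by summation.
    shots = [make_shot(sx) for sx in np.linspace(0.0, 1000.0, 16)]
    with Pool(processes=4) as pool:
        partial_images = pool.map(migrate_shot, shots)
    image = np.sum(partial_images, axis=0)
    iz, ix = np.unravel_index(np.abs(image).argmax(), image.shape)
    print(f"image energy focuses near x = {ix * DX:.0f} m, z = {iz * DZ:.0f} m")
```

On the clusters of that period the map step would have been an explicit message-passing distribution across physical machines rather than a local process pool, but the decomposition over input data and the final summation of partial images are the same.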

If we include the generalized phase screen methods of Wu (1992) and of Wu and M[] V (1996), it is probably safe to say that, as indicated in Figure 28, most of the algorithms we use today were developed during this period. Perhaps a more accurate statement is that the general schema needed to implement these algorithms on the existing machines of the day was in place. All that remained was the implementation.