The current issue of the journal Science is devoted to data -- its collection and analysis. In many fields -- astronomy, elementary particle physics, genomics, climatology, and others -- the automated collection of data is outstripping the ability of scientists to search the data for meaning. Too many numbers, not enough time. Even pattern-finding computer algorithms barely make a dent in the accumulating mass of digitally archived information. As Daniel Boorstin writes: "Before the age of the mechanized observer, there was a tendency for meaning to outrun data. The modern tendency is quite the contrary, as we see data outrun meaning."
It seems to me that we are on the threshold of a new kind of science -- wikiscience.
Professionally trained scientists will continue, as they do now, to design the instrumentation that automatically amasses data from telescopes, particle accelerators, genome sequencers, and the like, and they will have first looks at the data. The data will be archived -- massively -- on the internet and made available to the public. Tech geeks, teenage prodigies, unconventionally trained geniuses, and amateurs of all stripes can mess around to their heart's content. Open house. Free-for-all.
Of course, it's already happening to some extent, but just wait until it really takes off. And don't be surprised if some really good science comes out of the blue -- a revolutionary idea from an 18-year-old whiz kid in the back of beyond of China, say.
Terabytes of data, waiting to be explored.