[Figure: Gas fields in green and oil fields in red]
Today, the Australian geoscience company Frogtech Geoscience released the North Atlantic Regional SEEBASE® Study and GIS, a comprehensive study of basement architecture in the North Sea and along the North Atlantic conjugate margins of Norway, Greenland, the UK and Ireland.
The unique SEEBASE model visualises basement topography at the geologically complex Baltica-Avalonia-Laurentia collisional triple junction. The basement-focused analysis overcomes the challenge of interpreting basement beneath basalt and/or salt on seismic data, showing that the deepest North Atlantic depocentres are floored by shallow mantle below hyperextended crust.
Basin trends in the prospective North Atlantic and North Sea were directly controlled by pre-existing Caledonian structures and reinforced during the Variscan Orogeny. Paleogeography in reconstructed space shows North Sea basement platforms as barriers between contrasting marine and non-marine clastic depositional environments in the Early to Mid-Paleozoic. Reef complexes developed above these Caledonian-cored platforms, forming important yet underexplored Paleozoic reservoirs.
The study leverages Frogtech Geoscience’s proprietary potential-field geophysics interpretation process, which integrates gravity and magnetic data, basement terranes and composition, and iterative tectonic and reconstructed paleogeographic analysis to produce the SEEBASE depth-to-basement and present-day basement heat flow model.
Government funding of R&D has no economic benefit. It crowds out private funding to the detriment of economic growth. The direct effect of public research is negative.
The myth that science is a public good may be the longest-surviving intellectual error in western academic thought, having started in 1605 when a corrupt English lawyer and politician, Sir Francis Bacon, published his Advancement of Learning.
The world’s leading nation during the 20th century was the United States, and it too was laissez faire, particularly in science. As late as 1940, fifty years after its GDP per capita had overtaken the UK’s, the U.S. total annual budget for research and development (R&D) was $346 million, of which no less than $265 million was privately funded (including $31 million for university or foundation science). Of the federal and state governments’ R&D budgets, moreover, over $29 million was for agriculture and $26 million was for defense, which is of trivial economic benefit. America, therefore, produced its industrial leadership, as well as its Edisons, Wrights, Bells, and Teslas, under research laissez faire.
Meanwhile the governments in France and Germany poured money into R&D, and though they produced good science, during the 19th century their economies failed even to converge on the UK’s, let alone overtake it as did the U.S.’s. For the 19th and first half of the 20th centuries, the empirical evidence is clear: the industrial nations whose governments invested least in science did best economically—and they didn’t do so badly in science either.
It was the First World War that persuaded the UK government to fund science, and it was the Second World War that persuaded the U.S. government to follow suit. But it was the Cold War that sustained those governments’ commitment to funding science, and today those governments’ budgets for academic science dwarf those from the private sector; and the effect of this largesse on those nations’ long-term rates of economic growth has been zero. The long-term rates of economic growth since 1830 for the UK or the United States show no deflections coinciding with the inauguration of significant government money for research.
Modern science is built in discrete, publishable units that allow us to test specific ideas. This is what careers are built upon, starting with smaller research projects when we are students and building to larger themes when we are (more or less) established. The foundation of these projects is most often an experiment or study done in a relatively short period of time, designed to address a specific question. We conduct our experiment, get an answer, and then move on to the next question.
The tricky, but ultimately critical, thing to know is whether this short-term effect is representative of what happens year after year, or whether there would be other consequences if we continued the experiment for another week, year, or decade.
Thankfully, science is full of famous exceptions to this pattern of short and sweet studies. The Grants’ study of evolution in Darwin’s finches in the Galapagos, the Hubbard Brook study of nutrient dynamics following forest harvest in New England, and the Park Grass experiment on agricultural amendments at Rothamsted, England are excellent examples of how transformative a long-term perspective can be to our understanding of science.
All of these studies have generated surprising results along the way, surprises that could not have been anticipated, but were nonetheless critical. It is in these surprises, these unexpected results, that the true value of a long term approach can be seen. Besides documenting long-term impacts of fertilizers, the Park Grass experiment also documented radioactive fallout entering ecosystems. This clearly cannot have been an anticipated effect as the study began in 1856 and radioactivity was not described by Henri Becquerel for another 40 years.
Similarly, no one expected El Niño events to redirect the evolutionary trajectory of the finches, or acid rain to begin changing nutrient chemistry in soils and streams, but now we have a much better understanding of both.
A good reason to study something today may be an equally good reason to study it tomorrow and the next day. Decades of data from long-term studies can be used to test new theories that take a dramatically different approach to science. These data then become a wonderful complement to the short-term studies that will always dominate science – a context for judging what is truly important. After all, what do we really know?
Science, the pride of modernity, our one source of objective knowledge, is in deep trouble. Researchers at the biotechnology firm Amgen found that only six out of 53 landmark published preclinical cancer studies could be replicated. Researchers at a leading pharmaceutical company reported that they could not replicate 43 of the 67 published preclinical studies that the company had been relying on to develop cancer and cardiovascular treatments and diagnostics. Only about a third of psychological studies published in three leading psychology journals could be adequately replicated.
Much of the scientific literature, perhaps half, may simply be untrue. The false discovery rate in some areas of biomedicine is 70 percent. The non-replication rate in biomedical observational and preclinical studies is 90 percent.
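The arithmetic behind such alarming figures is worth seeing. A minimal sketch, in the spirit of the standard positive-predictive-value argument: if only a small fraction of tested hypotheses are actually true, then even with every study honestly applying a p < 0.05 threshold, a large share of "significant" findings will be false. The numbers below (10 percent of hypotheses true, 50 percent statistical power) are illustrative assumptions of mine, not figures from the article.

```python
def false_discovery_rate(prior_true, power, alpha):
    """Fraction of 'significant' findings that are actually false.

    prior_true: fraction of tested hypotheses that are really true
    power:      probability a real effect yields p < alpha
    alpha:      significance threshold (false-positive rate under the null)
    """
    true_positives = prior_true * power          # real effects detected
    false_positives = (1 - prior_true) * alpha   # null effects passing p < alpha
    return false_positives / (true_positives + false_positives)

fdr = false_discovery_rate(prior_true=0.10, power=0.5, alpha=0.05)
print(f"False discovery rate: {fdr:.0%}")  # roughly 47% under these assumptions
```

Under these assumptions, nearly half of published "discoveries" are false before any fraud, p-hacking, or publication bias is even considered, which is how false discovery rates of 50 to 70 percent become plausible.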
It is easy to believe that scientific imagination gives birth to technological progress, when in reality technology sets the agenda for science, guiding it in its most productive directions and providing continual tests of its validity, progress, and value. Technology keeps science honest. Basically, research detached from trying to solve well-defined problems spins off self-validating, career-enhancing publications like those breast cancer studies that actually were using skin cancer cells. Yet no patients were cured of breast cancer. The truth test of technology is the most certain way to tell if the knowledge allegedly being generated by research is valid. The scientific phenomena must be real or the technologies would not work.
The military-industrial complex generated the targeted scientific results that led to many of the technologies that have made the modern world possible, including digital computers, jet aircraft, cell phones, the internet, lasers, satellites, GPS, digital imagery, and nuclear and solar power. Research should be aimed more directly at solving specific problems, as opposed to a system where researchers torture some cells and lab mice and then publish a dubious paper.
Academic science, especially, has become an onanistic enterprise. End-user constituencies—patient advocacy groups, environmental organizations, military planners—outside of academia should have a much bigger say in setting the goals for publicly funded research. The questions you ask are likely to be very different if your end goal is to solve a concrete problem, rather than only to advance understanding. That’s why the symbiosis between science and technology is so powerful: the technology provides focus and discipline for the science.
Science is increasingly being asked to address such issues as the deleterious side effects of new technologies, or how to deal with social problems such as crime and poverty. Though these are, epistemologically speaking, questions of fact that can be stated in the language of science, they are unanswerable by science; they transcend it. Such trans-scientific questions inevitably involve values, assumptions, and ideology. Consequently, attempting to answer them inevitably weaves back and forth across the boundary between what is known and what is not known and knowable.
The great thing about trans-science is that you can keep on doing research, you can create the sense that we’re gaining knowledge without getting any closer to a final or useful answer. Some contemporary trans-scientific questions: “Are biotech crops necessary to feed the world?” “Does exposure to synthetic chemicals deform penises?” “Do open markets benefit all countries?” “What will the costs of man-made global warming be in a century?” “What can be done about rising obesity rates?” “Does standardized testing improve educational outcomes?” All of these depend on debatable assumptions or are subject to confounders that make it impossible to be sure that the correlations uncovered are actually causal.
Consider climate change. The vaunted scientific consensus around climate change applies only to a narrow claim about the discernible human impact on global warming. The minute you get into questions about the rate and severity of future impacts, or the costs of and best pathways for addressing them, no semblance of consensus among experts remains. Nevertheless, climate models spew out endless streams of trans-scientific facts that allow for claims and counterclaims, all apparently sanctioned by science, about how urgent the problem is and what needs to be done.
Vast numbers of papers have been published attempting to address these trans-scientific questions. They provide anyone engaged in these debates with overabundant supplies of peer-reviewed and thus culturally validated truths that can be selected and assembled in whatever ways are necessary to support the position and policy solution of your choice. It’s confirmation bias all the way down.
Dredging massive new datasets generated by an already badly flawed research enterprise will produce huge numbers of meaningless correlations. Since the integrity of the output is dependent on the integrity of input, big data science risks generating a flood of instances of garbage in, garbage out, or GIGO. The scientific community and its supporters are now busily creating the infrastructure and the expectations that can make unreliability, knowledge chaos, and multiple conflicting truths the essence of science’s legacy.
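The GIGO point is easy to demonstrate. A hedged illustration (mine, not the author's): generate a dataset of pure random noise, test every pair of variables for correlation, and a predictable share of pairs will look "significant" at p < 0.05 by chance alone, with no real relationship anywhere in the data.

```python
import math
import random

def pearson_p_value(x, y):
    """Approximate two-sided p-value for a Pearson correlation,
    using the Fisher z-transform and a normal approximation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 1 - math.erf(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

random.seed(0)
n_rows, n_vars = 100, 50  # 50 variables of pure noise, 100 observations each
data = [[random.gauss(0, 1) for _ in range(n_rows)] for _ in range(n_vars)]

hits = sum(
    1
    for i in range(n_vars)
    for j in range(i + 1, n_vars)
    if pearson_p_value(data[i], data[j]) < 0.05
)
pairs = n_vars * (n_vars - 1) // 2
print(f"{hits} of {pairs} pure-noise variable pairs look 'significant'")
```

With 50 noise variables there are 1,225 pairs, so roughly 5 percent of them, on the order of sixty "findings", clear the conventional significance bar despite the data containing nothing at all. Scale the variable count into the thousands, as big data invites, and the flood of meaningless correlations grows quadratically.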
Ultimately, science can be rescued if researchers can be directed more toward solving real world problems rather than pursuing the beautiful lie. In the future, the most valuable scientific institutions will be those that are held accountable and give scientists incentives to solve urgent concrete problems. The goal of such science will be to produce new useful technologies, not new useless studies. Contemporary science isn’t self-correcting, it’s self-destructing.
Most scientific papers just aren’t true. Yet we’re continually assured that government policies are grounded in evidence, whether it’s an anti-bullying program in Finland, an alcohol awareness initiative in Texas or climate change responses around the globe. Science itself, we’re told, is guiding our footsteps. There’s just one problem: science is in deep trouble. Much of the scientific literature is simply untrue, and science has taken a turn toward darkness.
It’s a worrying thought. Government policies can’t be considered evidence-based if the evidence on which they depend hasn’t been independently verified, yet the vast majority of academic research is never put to this test. Instead, something called peer review takes place. When a research paper is submitted, journals invite a couple of people to evaluate it. Known as referees, these individuals recommend that the paper be published, modified, or rejected.
If it’s true that one gets what one pays for, let me point out that referees typically work for no payment. They lack both the time and the resources to perform anything other than a cursory overview. Nothing like an audit occurs. No one examines the raw data for accuracy or the computer code for errors. Peer review doesn’t guarantee that proper statistical analyses were employed, or that lab equipment was used properly. The peer review process itself is full of serious flaws, yet is treated as if it’s the handmaiden of objective truth.
And it shows. Referees at the most prestigious of journals have given the green light to research that was later found to be wholly fraudulent. Conversely, they’ve scoffed at work that went on to win Nobel prizes. Richard Smith, a former editor of the British Medical Journal, describes peer review as a roulette wheel, a lottery and a black box. He points out that an extensive body of research finds scant evidence that this vetting process accomplishes much at all. On the other hand, a mountain of scholarship has identified profound deficiencies.
We have known for some time about the random and arbitrary nature of peer reviewing. In 1982, 12 already published papers were assigned fictitious author and institution names before being resubmitted to the same journal 18 to 32 months later. The duplication was noticed in three instances, but the remaining nine papers underwent review by two referees each. Only one paper was deemed worthy of seeing the light of day the second time it was examined by the same journal that had already published it. Lack of originality wasn’t among the concerns raised by the second wave of referees.
A significant part of the problem is that anyone can start a scholarly journal and define peer review however they wish. No minimum standards apply and no enforcement mechanisms ensure that a journal’s publicly described policies are followed. Some editors admit to writing up fake reviews under cover of anonymity rather than going to the trouble of recruiting bona fide referees. Two years ago it emerged that 120 papers containing computer-generated gibberish had survived the peer review process of reputable publishers.
There are serious knock-on effects. Politicians and journalists have long found it convenient to regard peer-reviewed research as de facto sound science. Saying ‘Look at the studies!’ is a convenient way of avoiding argument. But Nature magazine has disclosed how, over a period of 18 months, a team of researchers attempted to correct dozens of substantial errors in nutrition and obesity research. Among these was the claim that the height change in a group of adults averaged nearly three inches (7 cm) over eight weeks.
The team reported that editors ‘seemed unprepared or ill-equipped to investigate, take action, or even respond’. In Kafkaesque fashion, after months of effort culminated in acknowledgement of a gaffe, journals then demanded that the team pay thousands of dollars before a letter calling attention to other people’s mistakes could be published.
Which brings us back to the matter of public policy. We’ve long been assured that reports produced by the UN’s Intergovernmental Panel on Climate Change (IPCC) are authoritative because they rely entirely on peer-reviewed scientific literature. An InterAcademy Council investigation found this claim to be false, but that’s another story. Even if all IPCC source material did meet this threshold, the fact that one academic journal — and there are 25,000 of them — conducted an unspecified and unregulated peer review ritual is no warranty that a paper isn’t total nonsense.
If half of scientific literature ‘may simply be untrue’, then might it be that some of the climate research cited by the IPCC is also untrue? Even raising this question is often seen as being anti-scientific. But science is never settled. The history of scientific progress is the history of one set of assumptions being disproven, and another taking its place. In 1915, Einstein’s theory of relativity undermined Newton’s understanding of the universe. But Einstein said he would not believe in his own theory of relativity until it had been empirically verified.
It was an approach which made quite an impression on the young Karl Popper. ‘Einstein was looking for crucial experiments whose agreement with his predictions would by no means establish his theory,’ he wrote later. ‘While a disagreement, as he was the first to stress, would show his theory to be untenable. This, I felt, was the true scientific attitude.’
Real science invites refutation and never lays claim to having had the final say. As the US National Science Foundation recently pointed out, a scientific finding ‘cannot be regarded as an empirical fact’ unless it has been ‘independently verified’.
Peer review, as we have seen, does not perform that function. Until governments begin authenticating research prior to using it as the foundation for new laws and huge expenditures, don’t fall for the claim that policy X is evidence-based. It’s almost certainly not.