Ep 8: A Rational Approach To Ancient Aliens: Epistemology & Archeology
Welcome back to The UFO Rabbit Hole Podcast. I’m your host, Kelly Chase.
Today we begin a new leg of our journey down the rabbit hole, so I want to take a quick minute to recap where we’ve been and to begin to map out the way forward from here.
So far we’ve established that the Pentagon has admitted that UFOs are real, that they are operating with impunity in our airspace, and that we don’t know what they are. We’ve talked through some of the most common explanations for what the UFO phenomenon might be, considering everything from secret human technology to future humans and from extraterrestrials to ultraterrestrials.
We’ve also been introduced to some — though certainly not all — of the main players in the current disclosure movement, including former Director of AATIP, Luis Elizondo; former Deputy Assistant Secretary of Defense for Intelligence, Chris Mellon; investigative journalist, Ross Coulthart; former Blink-182 frontman and founder of the now-defunct To The Stars Academy of Arts & Science, Tom DeLonge — and many more.
Although we’ve covered a lot of ground, when it comes to understanding the full scope of the UFO phenomenon and its implications for humankind, we’ve barely scratched the surface. And yet, already it’s clear that the branches of ufology stretch out in every direction. Its tendrils are deeply tangled and entwined with questions of human history, consciousness, spirituality, quantum mechanics, and even the paranormal. Basically anywhere that we would look to ask the deeper questions about the nature of our reality and our purpose within it, we find ufology lurking — taunting us with seemingly unanswerable questions.
And so over these next several episodes we’ll begin to get a high-level overview of the many different facets of this phenomenon. We’ll talk about bleeding-edge scientific discoveries, dive deeper into the history of UFOs and what the United States government might be hiding, explore the deep ties of the phenomenon to the paranormal, discuss multiple models of the mind and consciousness, and even look at the mysteries surrounding the origin of the human species and what makes us so unique.
UFOs & The Dawn Of Civilization
Today though, I want to talk specifically about some of the mysteries that exist around the dawn of human civilization — where it happened, when it happened, and why it happened — and what role the UFO phenomenon may have played in shaping who we are today.
As we’ve discussed previously, it’s basically impossible to point to any specific incident or time period and say that that is the definitive beginning of the UFO phenomenon. The reality is that, although it has manifested itself in different ways throughout history, there is a clear pattern of strange objects and beings coming from the sky being reported by people and cultures around the world, dating back to our very earliest written records.
There are lots of explanations for what these reports might represent — from misunderstood weather phenomena to myths and allegories that were never meant to be taken as fact. And because it’s very difficult for us to know exactly what happened hundreds, if not thousands, of years ago, it can be very tempting to just write all of those stories off as figments from a more primitive, less scientific time period.
Evidence Of UFOs & Aliens In The Past
However, with the Pentagon’s admission that UFOs are real, I think there is a strong argument to be made that we no longer have that luxury. Members of our military and witnesses all around the world have reported seeing strange things in the sky that don’t abide by the laws of physics as we know them — why would we assume that this is the only time in human history that this has happened?
In fact, when we examine the hypotheses offered to explain the UFO phenomenon, most of them function in such a way as to strongly imply that this isn’t a new phenomenon.
For example, if what we’re dealing with is Future Humans, then there’s no reason to believe that they’d only come back to this time period. If they’re going back to observe or interact with earlier versions of humans at all, we can assume that this has been happening throughout human history.
If what we’re dealing with is Ultraterrestrials, then whatever this is has been here for a long time — potentially much longer than humans.
If what we’re dealing with is Interdimensional — well then there is almost no limit to who or what is coming through the veil, and limiting humanity’s exposure to these beings to just a few decades is nonsensical. If interdimensional travel is possible at all, it’s almost certainly happening on a scale that we can scarcely comprehend.
So of the most common hypotheses, that really only leaves the extraterrestrial hypothesis. Could beings from another planet have first arrived here about 80ish years ago and have been messing with us ever since? Potentially. If we’re limiting the scope of the phenomenon to the last century, then it’s certainly the hypothesis that makes the most sense.
However, if extraterrestrial visitation is happening now, it doesn’t preclude extraterrestrial visitation in the past. If Earth and humans are interesting to alien intelligences now, it’s likely that they would have been in the past, as well. Though, admittedly, the advent of the nuclear bomb could conceivably have been the thing that put us on their radar. It’s hard to know.
The point being, given what we know of the phenomenon so far, it makes a lot of sense to look to the past for potential clues.
And you know what that means, friends — it’s time for us to do an Ancient Aliens.
What Is The Ancient Astronaut Theory?
For those who aren’t familiar, Ancient Aliens is a show on the History Channel that is based on the premise that in our distant past, extraterrestrials came to Earth and interacted with — and sometimes helped — humans.
This theory is called Ancient Astronaut Theory and its proponents see evidence to support their claims in everything from petroglyphs depicting strange beings to ancient megalithic structures that they contend humans could not have built on their own. They see the stories in ancient scriptures as evidence of our ancestors trying to describe events and technologies that they had no words or framework to understand. Basically anything interesting, mysterious, or anomalous in our past has one explanation — aliens did it.
Or at least that’s the main takeaway of the show. Ancient Aliens, by its very infotainment nature, is not big on nuance and is notorious for stringing together wild theories and making bold proclamations based on evidence that is dubious at best. It makes for a fun watch if you’re not too concerned with the truthiness of the facts being presented. But for those who are looking for a scientific, rational approach to some of these bigger questions — this isn’t it.
In Defense Of Ancient Aliens
Now I want to come right out and say that I am an Ancient Aliens apologist. There are many in the UFO community who don’t share my views — and I understand why.
For many people, Ancient Aliens is their main — and perhaps their only — exposure to ideas surrounding the UFO phenomenon. I was certainly one of those people before I started down this rabbit hole. And if that’s the main place you’re getting your information, it can be hard to take any of it seriously. It just comes across as totally wackadoo.
But I still go to bat for Ancient Aliens.
In my early days of investigating this phenomenon the deal that I made with myself was that I would look at every piece of evidence and every theory, I would turn over every rock, and I would allow myself to really and truly consider ideas that I’d deemed to be absurd in the past. I didn’t need to accept it. I didn’t need to believe it. I just needed to explore it with an open mind, doing my best to set aside my previous biases and weigh each piece of evidence on its merits.
And being new to the world of UFOs and not knowing where to start, Ancient Aliens was a pretty natural place to begin. And so I started watching. And in the midst of the logical leaps and the hilarious non-sequiturs, there were also some truly astounding and challenging ideas. There were revelations that, frankly, floored me — prompting me to dig deeper.
The ideas that really struck me — the ones that kept me awake puzzling over if this could possibly be true, and contemplating the stunning implications if it were — weren’t the flashy ideas. It wasn’t wild speculation about nuclear wars or alien overlords or high technology in the distant past.
It was the smaller and more grounded details — the physical proof, the anomalies that could be ignored, but not denied, the tantalizing hints that the story of humanity might not be what we think it is.
And I’m grateful to Ancient Aliens for that.
However, it’s also important to recognize that although the ancient astronaut theory is the idea that most people are familiar with, there are actually a few other potential explanations for the mysterious sites and anomalous artifacts from our distant past — and most of them boil down to the idea that human civilization may be much older than we thought.
We’ll get back to that idea in part two and talk through some of the evidence for that potentially being the case. But for now, let’s stick a pin in it. All that’s important at this stage is to recognize that there are a wide range of possibilities when we look at the mysteries of our past — and many of them are just as startling and just as profound as ancient alien visitors.
The Big Question: What Do We Know For Sure About Ancient Human Civilization?
But before we start looking at any particular evidence or explanation, we first need to ask the biggest and most important question — what do we know for sure about ancient human civilization?
After all, whether we’re talking about alien intelligences interacting with and influencing human development or human civilization being older than we think it is — there should be evidence, right? And with the technological advances of the past several decades, archeologists and researchers are equipped with more powerful and precise tools than ever before. So surely there must be some things that we know for sure, which means that, hypothetically, there are things that we should be able to rule out, right?
The answer to that question was meant to be just the intro of this episode, but has ballooned into an episode of its own — and I think it’s important that we spend some time here. Because before we can have an intelligent and informed conversation about what may or may not have happened in humanity’s ancient past, we first need to have the conversation about what we already know and how we know it.
And also, as you’ve probably already guessed, I’m going to advocate for the idea that there is significant evidence that the established narrative about how, when, and why human civilization developed is incorrect — which is not something that I do lightly. I am not an archeologist or a scientist. I’m a content creator who reads too many books. And no matter how many books I read, any decent archeologist has forgotten more about this subject than I am likely to ever know.
So I think it’s important to first lay out the case for why I believe this line of questioning is warranted, and why I feel comfortable making the admittedly audacious claim that the experts may be wrong on this one.
Intro To Epistemology
Before we get into all that, I want to take a minute to talk about a very important concept that we’ll inevitably be revisiting again and again as we continue our journey down the rabbit hole — and that concept is epistemology.
So, for those of you that didn’t waste your parents’ money on ¾ of a philosophy degree, epistemology is the study or theory of the origin, nature, methods, and limits of knowledge. As we continue to explore all these strange ideas and theories, we’ll inevitably find ourselves coming back to the same basic questions:
What do we know to be true? And how do we know for sure that what we accept as true is actually true? Basically, it’s exactly what we’re talking about with regards to early human civilization, what do we know and how do we know it?
These may sound like really obvious questions with equally obvious answers, but the question of what we can know and how we can know that we know it has stumped philosophers — perhaps more than any other — for thousands of years.
Consider for a moment the difference between believing that something is true and knowing that something is true. We know instinctively that those two things are very different, but it can be hard to put your finger on how they are different.
A person can believe that something is true that is actually false. This happens all the time, often when someone is working with incomplete or incorrect information. People can jump to conclusions or put blind trust in a source that isn’t accurate. Sometimes it’s just a matter of a mistake or a miscommunication. But generally we agree that just because someone believes something doesn’t necessarily mean that it’s true. And we also recognize that we can become attached to our beliefs in a way that makes us more likely to dismiss or overlook data that doesn’t fit within that framework.
But knowing something is fundamentally different from believing it. You can know your best friend’s phone number. You can know that 2+2=4. You can know a poem word-for-word. You can know your name. And inherent in this idea of “knowing” something is the assumption that this knowledge is both objectively true and verifiable.
Simple, right? Not quite.
We don’t know anything in a vacuum. Every piece of knowledge that we have is built upon a foundation of other data and assumptions that support and confirm it — even if you never think about it. You don’t just know that 2+2=4. You know what “2” of something is. You can picture it in your mind. You know what a whole number is. You know what the “+” and “=” mean, and you can perform those mathematical functions in your head.
But the number 2, like all numbers, is itself an unprovable abstraction. Using number theory and this shared understanding that we have of what numbers are, we’ve created logic — which itself is the fundamental underpinning of science and language, and basically all of the cool stuff that humans know how to do. But while we can use numbers to do all of that, we can’t use numbers to prove themselves.
You could say that 2=2, but saying that two equals itself is not a meaningful statement in this context — just like saying “blue is blue”.
Neither is saying that 3-1=2. We can prove that that equation is consistent with the framework of mathematics, but we can’t prove that it is real outside of the framework of mathematics.
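To make the “provable inside the framework” idea concrete, here is what such an in-system proof looks like in the Lean proof assistant (a sketch, assuming Lean 4’s built-in natural-number arithmetic):

```lean
-- Both claims check out purely by the system's own computation rules.
-- Lean confirms they are consistent with arithmetic's axioms,
-- and that is all it confirms; nothing outside the framework is proven.
example : 2 + 2 = 4 := rfl
example : 3 - 1 = 2 := rfl
```

In other words, the proof assistant can verify that these statements follow from the rules of the framework, but it cannot tell you anything about whether numbers are “real” outside of it.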
Now, I don’t want to get too bogged down here, because this line of questioning inevitably leads to the question of whether we can actually know anything at all. If you drill down far enough into any single thing that we “know” to be true, you inevitably hit conceptual bedrock — or the place where your verifiable knowledge ends and the unverifiable, abstract concepts and assumptions from which they grow begin.
Logician, mathematician, and philosopher Kurt Gödel first expressed this idea with his Incompleteness Theorems in 1931. But you don’t have to be a mathematician or a philosopher to understand the limitations of our knowledge. If you’ve ever spent any time around a young child, you likely already know that each of us is only about 5 “whys” away from a complete existential unraveling.
Why is grass green?
Because it has a bright pigment called chlorophyll that makes it green.
Why does it have chlorophyll?
Because plants need it to make food in a process called photosynthesis.
Why do plants do that?
Because that’s how they evolved over billions of years.
Why did they evolve that way?
God maybe? Or maybe it’s just the random unfolding of events over billions of years. I honestly don’t know, kid, and I don’t have time to be contemplating the absurdity of existence right now. Put your shoes on.
Now, for our purposes, it doesn’t serve us to get lost in navel-gazing questions about whether or not knowledge is even possible. We conduct our lives under the assumption that we can know things, and our observable reality seems to suggest that we can — because our knowledge allows us to do things that we otherwise wouldn’t be able to do, like calculating the exact angle of entry for a spacecraft returning to Earth so that it doesn’t burn up in the atmosphere.
So if our experience suggests that knowledge is possible, why am I bringing this up?
I bring it up because I think that it’s important to bring a certain level of humility to any conversation about what we can know and what we can’t know. We have very good and very reliable frameworks that allow us to understand complex ideas and relationships, and methodologies that allow us to verify our conclusions against the conclusions of others. But it’s important to remember that these things are tools, and not necessarily a representation of an objective, universal reality.
This is a mistake we make a lot, as humans. Neil deGrasse Tyson has famously said that, “The good thing about science is that it’s true, whether or not you believe in it.” And given the cultural context of the last few years, many champions of science have grabbed onto this statement and have used it as both a rallying cry and as a weapon against those who don’t share their views.
But if you break it down, that statement isn’t just blatantly false, but fundamentally unscientific. Science isn’t “true”. Science isn’t a collection of verified facts. Science is a methodology, and like any methodology it’s as fallible as the people who are utilizing it.
Returning to an example that we’ve used a few times now, let’s consider 2nd century mathematician and astronomer, Ptolemy. His model of the solar system improved upon past models so significantly that it was used to accurately predict the movements of celestial bodies in the sky for over 1400 years — but it was also completely wrong and based on the assumption that the Earth was the center of everything.
To be clear, I’m not saying that we shouldn’t trust scientific conclusions. The scientific method is by far the best way to test our assumptions about our reality in a way that is as free from bias as possible. It’s imperfect, yes, but we used it to put a man on the moon. So it’s still pretty damn amazing.
But what I am saying is that we need to be careful not to turn science into a religion. There are no sacred cows in science. There is no scientific “fact” that can’t be undone with the introduction of a new data point that changes everything.
But that’s a hard thing for humans to remember. We live our lives with the assumption that we can observe and have knowledge of objective reality — which just makes practical sense. It keeps us safe. It creates order where there was none. It’s the basis of civilization itself.
So it can make us super uncomfortable to recognize that our sense of objective reality is so contingent and potentially tenuous. We don’t like to think about the fact that everything that we regard to be true is essentially just a hypothesis that is waiting to be disproven.
We don’t like “I don’t know.” We like answers. We like to believe that those answers are clear. We like to believe that our species has cracked everything from the structure of atoms to the limits of the Universe and that there is nothing left for us to know. And when we have questions about our reality, we increasingly look to scientists to be the arbiters of truth.
But scientists don’t have answers. They have science — a methodology that allows us to approach the truth, but is not truth itself. And it’s not just that their answers can change — given enough time, they will change.
And a pattern that we see repeated again and again throughout history — almost without exception — is that when we get new data that upends our models and invalidates our fundamental understanding of the nature of our reality that data is initially rejected as impossible, ridiculed as pseudoscience, and marginalized along with all those who dare to entertain such heretical viewpoints.
And seemingly no one is totally immune from this bias against data that threatens our worldview. Einstein himself said that the greatest blunder of his career was not believing what the math was telling him about the nature of the Universe — that it wasn’t static and unchanging, but was actually expanding. Einstein invented the cosmological constant to force his equations to describe a static universe — and it wasn’t until Edwin Hubble showed that the light from distant galaxies was red-shifted, meaning they were receding from the Earth, that he admitted he’d been wrong all along.
And I would argue that — as profound as Einstein’s discoveries about the Universe were — the idea that we are not alone, and perhaps never have been, represents the single greatest paradigm shift in human history. The implications are as challenging as they are wide-reaching, and we should expect that it will make people uncomfortable. It will make people angry. It will make people defensive.
Because, to be clear, what we’re talking about is nothing less than rewriting human history as we know it. But sometimes that’s exactly what science requires of us.
Alright, I think we’ve spent enough time in the philosophical weeds. Let’s get to talking about the matter at hand — which is, what do we know for sure about early human civilization?
As we’ll see this isn’t the easiest question to answer, at least in part because archeology itself presents some unique epistemological challenges and complications.
How Do Archeologists Know How Old Something Is?
Let’s start with dating. If you want to understand what happened in the distant past, you need to be able to construct some kind of a narrative. And to construct an accurate narrative, you need to know not just when things happened, but the order in which they happened relative to other things.
So how do researchers figure out how old something is?
There are several different methods depending on what it is that you’re trying to date, and they fall into two main categories: relative and absolute.
Relative Dating Techniques
Relative dating techniques involve establishing a basic timeframe for something by comparing it to other old things — and are therefore far less precise. It’s actually more accurate to think of these methods as “ordering” rather than “dating”.
Before more precise methods of dating were available, relative dating techniques were the main way that researchers attempted to assign dates to things — and although they aren’t as precise, many of these techniques are still used today.
One such method of relative dating is stratigraphy. As rock and sediment build up over time, they form layers. This means that, in most cases, you can assume that if you find a fossil or an artifact in a certain layer, it’s older than anything found above it.
A submethod of stratigraphy is biostratigraphy, which relies on faunal association. Sometimes researchers can compare a find to other fossils discovered in the same layer. If you know how old those fossils are, then you can establish that whatever you’re dating was around at the same time.
You can potentially get more precise dating information by looking specifically at microscopic animal life in the fossil record. Microfauna tend to evolve much faster than larger organisms, so each species exists for a much shorter time in the fossil record, allowing researchers to zero in more precisely on a particular time frame.
Another method of relative dating is paleomagnetism. About every 100,000 to 600,000 years, the Earth’s magnetic poles flip. These changes can be detected by looking at the orientation of magnetic crystals in certain kinds of rock. Now obviously, 100,000 to 600,000 years is a pretty long window, making this method not very precise. However, it is often used as a check against other methods of dating to help confirm their conclusions.
Researchers can also use a method called tephrochronology. Whenever there is a major volcanic eruption, large amounts of dust, rock, and other materials are shot up into the atmosphere, where they eventually rain down on the land below. This layer of sediment will have a unique geochemical fingerprint. So if you know the date of a particular eruption, you can date things relative to that layer, with everything above it occurring after the eruption and everything below it occurring before.
Absolute Dating Techniques
As you can see from these methods, they aren’t particularly precise. However, scientists have also developed several other methods that allow them to get a much more accurate sense of the age of something.
The first, and by far the most common, is radiocarbon dating, which involves measuring the quantity of carbon-14 in a sample. Carbon-14 forms high up in the atmosphere and is then absorbed by plants and passed on to the animals that eat them. As a result, you can find carbon-14 in anything that is, or ever was, alive.
Carbon-14 is a radioactive isotope. A radioactive isotope is basically a version of an atom that has a different number of neutrons. Carbon usually has 6 neutrons, but carbon-14 has 8, which makes it both heavier and less stable.
Because it’s less stable, carbon-14 breaks down over time, with one of its neutrons splitting into a proton and an electron. The electron then escapes, while the proton stays, and with one fewer neutron and one more proton, the carbon-14 atom decays into nitrogen-14.
This process begins as soon as something dies and stops taking in carbon-14, but the process of radioactive decay is very slow. Its half-life — the amount of time it takes for half of a given quantity of carbon-14 to decay — is 5,730 years. Scientists can then use the amount of carbon-14 left in organic materials, like bones or plant fibers or ashes from a campfire, to figure out how old something is.
However, there are limitations to radiocarbon dating. The first is that after a period of about 50,000 years, organic materials have lost more than 99% of their carbon-14, which means that we can only use it to reliably date things from the last 50,000 years or so. The other limitation is that it only works on organic matter, so you can’t use it on things like rock, metal, or other minerals.
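The half-life arithmetic behind this can be sketched in a few lines of Python (the function names here are illustrative, not taken from any real dating software):

```python
import math

HALF_LIFE_YEARS = 5_730  # half-life of carbon-14

def fraction_remaining(age_years: float) -> float:
    """Fraction of the original carbon-14 left after a given time."""
    return 0.5 ** (age_years / HALF_LIFE_YEARS)

def age_from_fraction(fraction: float) -> float:
    """Invert the decay curve: estimate age from the measured C-14 fraction."""
    return -HALF_LIFE_YEARS * math.log2(fraction)

# One half-life gone means half the carbon-14 is left
print(round(age_from_fraction(0.5)))      # 5730

# After 50,000 years, well over 99% of the carbon-14 is gone,
# which is why radiocarbon dating tops out around there.
print(fraction_remaining(50_000) < 0.01)  # True
```

Real-world radiocarbon dating also involves calibration curves to correct for historical fluctuations in atmospheric carbon-14, but the core idea is just this exponential decay.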
Single Crystal Fusion & Uranium Series Dating
To date rock, researchers usually turn to either single crystal fusion or uranium series dating. Without getting too in the weeds, like radiocarbon dating, both of these dating techniques involve using the decay rates of various isotopes to determine how old something is. Conveniently, while uranium series dating only works for things that are 40,000 to 500,000 years old, single crystal fusion works best on things that are 500,000 years old or older — and actually gets more accurate over time.
Trapped Charge Dating
For materials like teeth and coral that are especially good at trapping electrons knocked loose by cosmic rays and natural background radiation, researchers can use a technique called trapped charge dating. This can be a complicated process that involves looking at multiple variables, including the amount of radiation that the object was exposed to each year, to calculate the rate at which electrons were trapped. And to make things more complicated, it’s only accurate for things that are less than 100,000 years old.
Besides teeth and coral, certain silicate rocks, like quartz, are also very good at trapping electrons. For researchers who are specifically working with prehistoric tools made of flint — a cryptocrystalline form of quartz — thermoluminescence can help them determine the age of the tool itself, not just the materials it was made from. The last stage in the process of making these tools was usually to drop them into a fire, which frees all of the trapped electrons, essentially resetting the clock for that object. This allows researchers to determine how long ago the tool was made.
Optically Stimulated Luminescence
For things that have been buried for a long time, researchers can use Optically Stimulated Luminescence in a process that allows them to determine, not when something was made, but how long it’s been since it was last exposed to sunlight.
Good Dates, Bad Dates & Ugly Dates
There are a few other methods of dating, but I think you get the idea. You don’t need to know every method of dating or how it works. What’s more important is to understand the challenges that researchers encounter when they are trying to date an archeological find.
Each method has its strengths and its limitations. And because of these limitations, researchers generally strive to date things as many different ways as possible to help them zero in on the correct date. The result is that there is a lot of variability in how accurate dating of these objects can be.
If something is dated using at least two different methods and is verified by multiple independent labs that all point to the same timeframe, you can have a pretty high degree of confidence that that date is legit. If something is only dated using one method, or if it isn’t independently verified, that date may be less reliable.
Challenges Dating Megalithic Structures
And then there is the case of the things that you can’t date accurately with any of the above methods — and maddeningly, this applies to a category of archeological treasures that we’d most like to date — which is basically any structure made out of stone.
Thousands of years ago, cultures around the world erected massive stone monuments, temples, and other structures out of enormous blocks of stone referred to as megaliths. Many of these megalithic structures remain largely intact today, even as the civilizations and cultures that built them have been lost to the sands of time.
Their stunning architecture stirs our sense of wonder. Their confounding size pricks at the corners of our imagination. Their mysteries call to us, and there is this uncanny sense that if only we could unravel their secrets, we might gain some special insight or knowledge that somehow has been lost.
They’re also extremely difficult to date.
We can tell when the stone was formed, geologically. We can make some assumptions about what kinds of tools would have been needed to build these structures that can help us pin it to a particular time period. We can compare it stylistically to other art, architecture, and writing that we are able to date and draw some conclusions that way. And we can often date organic and other materials found in and around the site.
But all of these are indirect and imprecise ways of arriving at a date. And if you’re working with the wrong set of assumptions to begin with, you could misdate something entirely. It’s a messy business.
For instance, you could find a crypt with bones in it and date the bones to a certain time period, and then assume that the construction of that crypt was more or less contemporaneous to the people who were buried there. But what if it had been in use for thousands of years before that? What if it had changed hands a few times due to war or even just the passage of time?
The most educated guess in the world is still just a guess without anything to confirm it.
Many megalithic structures are built on top of earlier structures.
Another challenge in dating megalithic structures — or really any structure built in antiquity — is that cultures around the world have shown a tendency to rebuild on top of an existing structure or foundation. You see this especially in places that have a religious or spiritual significance.
There’s perhaps no place on Earth where we see this occurring more than in Jerusalem, where 3 different monotheistic religions — Judaism, Christianity, and Islam — have some of their most sacred sites. And in some cases, these sites are literally right on top of each other.
The Islamic holy site the Dome of the Rock sits on the Temple Mount in Jerusalem, over a spot that is believed to be both the place where the God of the Old Testament asked Abraham to sacrifice his son and the spot from which Muhammad ascended into heaven. The Dome of the Rock was built on the former site of the Second Jewish Temple, first built in 516 BC, which in turn was said to have been built on the foundation of Solomon’s Temple, completed around 957 BC. The Western Wall is believed to be a remnant of the Second Temple.
So in this one spot you see layers upon layers upon layers of history. And this creates complications. Obviously, we can’t really go digging around these sites to try to learn more about what’s under them. Beyond their profound significance to billions of people around the world, they are archeological treasures in their own right. Doing anything that could potentially damage these structures is unthinkable.
And because we aren’t able to study what lies underneath these sites, the evidence for any older structures that may have come before is lost to us — their mysteries and meaning swept away by the sands of time.
Megalithic structures can last for thousands, if not tens of thousands, of years.
Another thing that makes megalithic structures particularly difficult to date is that they are built to last for a really long time — far longer than almost anything else humans have ever built.
To help put that into perspective, you can think about it this way: If every single last human alive disappeared from the planet tomorrow, within just 1,000 years almost all traces of our civilization would have been eroded, buried, or otherwise swallowed by nature. Even Manhattan would once again be a lush, green island — much like it was when Henry Hudson first explored it in 1609.
Some signs of humanity would still remain, though. The Washington Monument would likely still be standing, though it might be underwater. The walls of Notre Dame in Paris may still be recognizable. Stonehenge would still stand.
But if you zoom out to 10,000 years after humans, what would remain then?
The Great Wall of China would have eroded tremendously, though it would still be recognizable. The pyramids and the Sphinx would likely be long gone, unless they were buried by the Sahara. Only our largest stone structures would remain — like the remnants of the Hoover Dam. The faces on Mount Rushmore, however, could survive in a recognizable form for millennia.
This is because granite, one of the hardest kinds of rock and a popular megalithic building material, erodes at a rate of only about one inch every 10,000 years. So when we’re talking about large structures built with granite (or similarly durable) blocks of stone, we’re talking about something that can exist and endure outside of the normal human time scale. In the time it would take for New York City to become a forest again, a megalith would have undergone only a bit of weathering.
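To make that time scale concrete, here’s a back-of-the-envelope sketch using the erosion figure just cited — one inch of granite per 10,000 years. This is a simple linear approximation for illustration only; real weathering rates vary with climate, rock quality, and exposure.

```python
# Rough erosion estimate, assuming the linear rate cited above:
# granite weathers at roughly 1 inch per 10,000 years.
EROSION_INCHES_PER_10K_YEARS = 1.0

def erosion_inches(years: float) -> float:
    """Approximate inches of granite lost to weathering over `years`."""
    return EROSION_INCHES_PER_10K_YEARS * (years / 10_000)

# In the ~1,000 years it takes nature to reclaim New York City,
# a granite megalith loses only about a tenth of an inch.
print(erosion_inches(1_000))   # 0.1
print(erosion_inches(10_000))  # 1.0
```

At that rate, even 50,000 years of weathering would remove only about five inches of surface — which is why a multi-ton granite block can plausibly outlast every other trace of the civilization that cut it.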
When we’re talking about structures that can endure for thousands, if not tens of thousands, of years, and with no direct way to confirm when they were built, it’s entirely possible that our understanding of how old they are is simply wrong. And if, as some archeologists suggest, some of these megalithic structures are older than we previously thought — what might that mean for the story of humanity? And what might we discover about ourselves as we begin to reimagine the events of our distant past?
Other Epistemological Challenges Of Archeology
And, beyond challenges with dating the most interesting and mysterious remnants of our ancient past, archeology itself presents some additional epistemological challenges as a result of the unique way that data is collected in this field of science.
Archeology often involves rare discoveries that can’t be replicated at will.
One of the most important aspects of validating scientific conclusions is the ability to replicate results. However, archeological discoveries tend to be rare — and they only get more rare the farther back in time you go. Sometimes there’s very little to compare a site to, which makes it even more challenging to draw conclusions.
The act of discovery can destroy most — if not all — of the key evidence.
For example, if you have to dig to uncover a site, as you’re digging it up, you’re effectively destroying all of the evidence of the site’s stratigraphic positioning. Archeologists are trained in how to conduct this sort of excavation and carefully document every stage to preserve as much information about the original condition of the site as possible.
Knowledge of key elements is entirely dependent upon the observers who were present.
No matter how careful someone is, it would be impossible for anyone to record absolutely everything about a site as it existed in its original condition. Any small detail that is overlooked could potentially have huge implications for the conclusions that are later drawn about a particular site, but once it’s been excavated that evidence is destroyed forever.
Interpreting an archeological site is never simple.
On any given day, you could take two top archeologists, plop them down at the same site, ask for their conclusions, and get two different answers. Coming to a consensus is often difficult, and made more so by the fact that the only data available is that which was collected by those who were on site at the time of the dig.
There are incentives for people to lie.
We’d like to think that we can always depend upon a scientist to be objective and truthful in their conclusions, but the reality is that scientists are subject to the same conscious and unconscious biases — and the same temptations — as the rest of us.
While there are a few archeologists who manage to make multiple important discoveries in their careers, the reality is that many brilliant and talented archeologists toil for their entire careers without ever personally finding anything of import. Beyond just earning the respect of their peers, making an important discovery means more money for research and excavations, better positions at better universities, and even potentially book and media opportunities. It’s not hard to understand why someone might leave out information that could contradict the importance of their find, or might embellish slightly to make something seem more important than it was.
And I’d argue that there is the opposite pressure, as well — to gloss over or overlook anomalous details that contradict earlier findings and conclusions in the field. As with anything else, archeology has a deeply enmeshed established narrative regarding the history of humanity — and those who choose to challenge the status quo are often met with hostility and derision. They can be labeled as pseudoscientists, if not outright con artists. And for people who are slapped with that label, the fallout can be personally and professionally devastating. The research money and cushy positions dry up. They become outsiders.
Flaws With The Peer Review System
Which brings us to another epistemological challenge that impacts not just archeology, but the practice of science as a whole — which is that there are flaws in the peer review system that can make it difficult for particularly challenging or unpopular ideas to get the same consideration as ideas that more closely align with the established narrative.
If you’re unfamiliar, peer review is the system used to assess the quality of a manuscript before it is published in a scientific journal. Independent researchers in the relevant research area assess submitted manuscripts for originality, validity and significance to help editors determine whether a manuscript should be published in their journal.
Basically, it’s a way for other experts in the field to look at how the research being presented was conducted, to assess the conclusions that were drawn, and to give an opinion on how legit it is. To get published, scientists basically need to have a group of their peers look over their work and say, “This looks right to me.” And peer review isn’t just used to determine which articles are published, but to determine who receives scientific grants, which projects are funded, and which scientists are hired and promoted.
And that makes sense, right? We need some sort of system of checks and balances in science. We need a way to sort out the experiments that were poorly constructed and the conclusions that were unsound. We need a way of coming to consensus.
And don’t get me wrong, for all its flaws, the peer review system is the best answer we have for that, by far. It’s like democracy — it’s a messy, imperfect system, but we have yet to come up with anything better. So please don’t hear what I’m about to say as an attack on the peer review system or a suggestion that it should be abandoned. But I do think it’s important to understand what its weaknesses are, and how they can become particularly evident in the case of scientific findings that radically alter our existing beliefs.
So what are those limitations?
Reviewers are usually untrained, unpaid, and overworked.
Now granted, “untrained” means something very different when you’re talking about a scientist with one or more terminal degrees. The people who are conducting peer review are highly trained in both their field and in scientific methodologies as a whole.
However, they aren’t generally trained on the peer review process itself. And, as we’ll see, there are certain mistakes and biases that can impact what studies get published and which ones don’t — which in turn impacts which scientific findings are given credibility and which ones are dismissed. So a little training on how to check yourself for those biases throughout the process seems warranted — but it’s not something that most peer reviewers ever receive.
And with 2.5 million peer-reviewed articles being published annually — a number which does not take into account all the papers that are rejected — it’s difficult to find enough people to do the work. Add in the fact that they are generally unpaid for their labor, and the result is a pool of peer reviewers who are well-meaning, but almost certainly overworked.
There is evidence that reviewers aren’t always consistent.
In a 1982 study, two researchers selected 12 articles that had already been accepted by well-respected scientific journals and switched the names of the authors and academic institutions to fake ones. They then resubmitted the exact same articles to the same journals that had accepted them in the previous 18 to 32 months. Surprisingly, only 3 of the papers were identified by the editors and reviewers of these journals as articles they’d already published. And of the 9 that made it through to review, all but one were turned down, with 89% of the reviewers recommending rejection.
Obviously, this lack of consistency suggests that there are factors beyond just the black-and-white scientific merits of an article that contribute to whether or not it’s published.
There is evidence that the most innovative and impactful scientific ideas are more likely to be rejected through the process of peer review.
A 2015 study that tracked the popularity of rejected and accepted manuscripts at three top medical journals found that, while the editors and reviewers generally made good decisions regarding which manuscripts to publish and which to reject, some of the most highly cited articles were the most likely to be rejected.
They started with 1,008 manuscripts, 808 (or about 80%) of which were eventually published. However, among those that were rejected were ALL of the 14 most cited articles. This suggests that while the peer review system is great for raising the overall average quality of articles that are published, it isn’t very good at recognizing and promoting the most important and impactful research. Which makes sense — if you want to smooth out the mean of any data set, you first need to eliminate the outliers.
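The numbers as described in the text can be checked with a quick calculation — a sanity check of the cited figures, not data from the study itself:

```python
# Figures as cited in the text for the 2015 three-journal study.
total_submitted = 1_008
published = 808
top_cited = 14  # all 14 of the most cited articles were among the rejects

acceptance_rate = published / total_submitted
print(f"Acceptance rate: {acceptance_rate:.1%}")  # Acceptance rate: 80.2%

# Even though ~80% of manuscripts were accepted overall,
# 0 of the 14 eventual top-cited papers made it through on first submission.
print(f"Top-cited papers accepted: {0}/{top_cited}")
```

In other words, a process that accepts four out of five submissions still managed to miss every single one of the papers that later proved most influential — exactly the outlier-trimming effect described above.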
But sometimes, the outlier is the data point that changes everything. What then?
So the takeaway here is this — the peer review system is a necessary and well-proven process for maintaining an overall level of quality and scientific rigor with regard to the scientific articles that are published. However, the system is not without its flaws, and has a general bias toward the status quo and against more innovative or radical ideas.
Putting It All Together
So putting it all together, what does all of this mean for us?
The Dunning-Kruger Effect
What it doesn’t mean — and I want to be very clear about this — is that we should just throw everything out the window, or assume that we are as good as or better than archeologists at interpreting data in their field. We absolutely are not.
Right now, we’re feeling smug and full of facts. We know about the challenges of dating archeological sites. We know that most archeologists only ever get to assess second- or third- hand data. We understand the biases inherent to the peer review process. We’re feeling very smart — and who doesn’t love feeling smart?
But everything that we’ve talked about in the last few minutes wouldn’t even fill one chapter of a 101 textbook. It’s not a drop in a bucket, it’s a drop in an ocean compared to the knowledge of people who have spent decades of their lives dedicated to studying these things.
When we find ourselves in this place, the Dunning-Kruger Effect is a good touchstone to keep us grounded. If you’re unfamiliar, the Dunning-Kruger Effect is an extremely common form of cognitive bias where people who have very limited knowledge of something greatly overestimate how much they know about that thing.
It’s basically a case of not knowing what you don’t know. If you only know a little bit about something, you can easily make the mistake of assuming that what you know is basically all there is to know. And interestingly, once people start to learn more about a subject, and become more aware of its true scope and complexity, they very quickly go from overestimating their knowledge to realizing that they know next to nothing.
So basically, if you haven’t studied something deeply enough to have been truly humbled by it, you probably know much, much less about it than you think.
Is there a conspiracy in academia to cover up the true origins of human civilization?
I also want to be clear that, although I do think there is more than enough evidence for us to question the established narrative about the age and origins of human civilization, and to conclude that the mechanisms and institutions of academia have had the effect of suppressing this knowledge — I absolutely do not believe that there is a conspiracy within academia to cover any of this up.
Because I don’t think that there needs to be. When I look at all of these things together, I don’t see a plot. I don’t see sinister machinations. What I see is our humanity, in all its nobility and absurdity.
We want to belong, so we shift and mold our beliefs to align with the whole. Our need to know who we are is so profound that we tell ourselves that we already do just to ease the existential friction. We mistake the limits of our knowledge for the limits of what is. We strive. We falter. We’re wrong. We rise and try again.
It’s just who we are.
What is the difference between science and pseudoscience?
And finally, I want to take a minute to talk about a word that gets thrown around a lot when we’re talking about scientific ideas that threaten the established narrative — and that word is pseudoscience. And for any scientist who hopes to maintain the credibility necessary to have a career, much less get money to fund their research, it’s the ultimate kiss of death.
But what is pseudoscience?
Pseudoscience is a collection of beliefs or practices mistakenly regarded as being based on scientific method.
So if a person is using the scientific method to define a question, make predictions, gather and analyze data, and then draw conclusions — that’s science. If a person is not using the scientific method — that’s not science. And if someone is doing research and drawing conclusions in a way that they think is in alignment with the scientific method but actually isn’t — that’s pseudoscience.
Pretty straightforward, right?
And I come back to this idea again and again, because it’s in this distinction that we can begin to get a solid foothold to help us evaluate whether or not ideas that have been relegated to the realm of pseudoscience have actual merit. And best of all, we don’t need to be experts in anything to do so.
It’s as simple as this: is this idea being called pseudoscience because the approach is unscientific or because the conclusions don’t conform to the established narrative?
If it’s unscientific that should be easy to demonstrate because either the scientific method wasn’t used or it wasn’t used correctly. Maybe a variable wasn’t accounted for. Maybe there was a flaw in the way that the data was collected. There are lots of possibilities, and any true expert in a field that is leveling an accusation of pseudoscience against a colleague should be able to clearly articulate their basis for saying so.
But what’s been kind of astonishing to me as I’ve pursued this line of questioning is how often accusations of pseudoscience mention none of those things and focus instead on the fact that the conclusions being drawn by this person are “impossible.”
But how many impossible things has science proven to be possible? “Impossible” is a meaningless term for a species as young as our own. What could we possibly know about what’s impossible?
Impossible in this context means that it doesn’t conform to the conclusions drawn based on the existing data set. That doesn’t mean it’s wrong. It just means that it doesn’t fit. And in science we don’t shape our data to conform to the conclusions — we shape our conclusions based on the data. The fact that a data point doesn’t fit isn’t grounds to throw it out — it’s a hint that there may be a piece of the puzzle that we’re missing.
And so while we may not be archeologists or scientists, I would argue that we are at least able to determine whether a set of ideas that is labeled as pseudoscience has been labeled so fairly. That doesn’t mean that it’s then necessarily true, only that we should wait to reject it until we’re able to do so on its scientific merits.
And that, at least, gives us a place to start.
Where do we go from here?
So, as we conclude this introduction that became an entire episode, I feel like we have a solid foundation to go forward from here and begin to explore some of the mysteries surrounding the dawn of human civilization.
We recognize the necessity of reevaluating the story of humanity through the lens of the UFO phenomenon. We understand some of the challenges that exist in creating a reliable and consistent narrative around events of the distant past. We know that although the peer review system helps ensure that a high level of scientific rigor and excellence is maintained in the work that is published, it also has a tendency to reject the work that is most important and impactful. And finally, we have a simple framework to help us assess whether radical and innovative ideas are being dismissed as pseudoscience based on their merits or if other biases may be at work.
And that’s where we’ll pick up next time, as we sift through the sands of time looking for answers and begin to explore some of the most astonishing evidence that suggests that the history of human civilization on this planet may be far older and more dazzling than we ever imagined.
I’ll see you then.