
The Download: how humans make decisions, and Moderna’s “vaccine” word games

The Illusion of Choice: How the Brain Decides Before You Do

Imagine standing at a crossroads, weighing your options. Left leads to a promotion, right to a life abroad. You deliberate, feel the weight of the decision—only to discover your brain already chose left 300 milliseconds before you were consciously aware of it. This isn’t science fiction. It’s the startling reality emerging from cutting-edge neuroscience, where researchers like Uri Maoz are peeling back the layers of human agency to reveal a truth that challenges our deepest assumptions about free will.

Maoz, a computational neuroscientist at Chapman University, didn’t set out to dismantle the concept of personal responsibility. But after reading a provocative article in his twenties suggesting that decisions might be illusions generated after the brain has already acted, he was hooked. “After that,” he says, “there was no turning back.” Today, his work sits at the intersection of neuroscience, philosophy, and artificial intelligence, using brain implants, predictive algorithms, and real-time neural monitoring to decode the hidden mechanics of decision-making.

📊By The Numbers
In one of Maoz’s landmark studies, participants wearing EEG caps made simple choices (like pressing a button with their left or right hand), while researchers recorded their brain activity. The team could predict the choice with over 60% accuracy—up to 7 seconds before the person was consciously aware of deciding. This suggests that the brain initiates decisions long before the mind “knows” it has made one.
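The kind of decoding described above can be illustrated with a toy example. The sketch below is purely synthetic: it fits a plain logistic-regression decoder to random “neural” features that carry a weak, choice-related signal, which is enough to push accuracy modestly above chance. Every variable name and number here is illustrative; real studies decode from EEG or implanted-electrode recordings, not random draws.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_features = 400, 8
choices = rng.integers(0, 2, n_trials)          # 0 = left, 1 = right
# Give the "neural" features a weak choice-related signal, so the
# decoder does only modestly better than chance.
signal = 0.3 * (choices[:, None] - 0.5)
X = rng.normal(size=(n_trials, n_features)) + signal

# Plain logistic regression fit by gradient descent.
w = np.zeros(n_features)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))                # predicted P(right)
    w -= 0.01 * X.T @ (p - choices) / n_trials  # gradient step

accuracy = np.mean((X @ w > 0) == choices)
print(f"decoding accuracy: {accuracy:.2f}")
```

With this weak signal the decoder lands well above 50% but far from perfect, which is the striking shape of the real result: not that the brain’s choice is fully readable, but that it is readable at all before awareness.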

This revelation echoes decades of research, including the famous Libet experiments of the 1980s, which first showed that brain activity precedes conscious intention. But Maoz’s work goes further. He’s not just observing patterns—he’s building models that simulate decision-making in real time, using data from epilepsy patients with implanted electrodes. These patients, undergoing treatment for seizures, allow scientists to monitor neural activity with unprecedented precision.

The implications are profound. If our brains are making decisions before we’re aware of them, what does that mean for moral responsibility, legal accountability, or even the way we design AI systems that mimic human cognition? Are we merely narrators of actions already set in motion by neural circuitry?

The Neuroscience of “Free Won’t”

While the idea that we don’t truly choose may sound unsettling, Maoz and others propose a more nuanced view: perhaps we don’t initiate actions, but we can veto them. This concept, known as “free won’t,” suggests that while the brain may generate impulses autonomously, consciousness acts as a gatekeeper—allowing us to suppress unwanted actions.

Think of it like a self-driving car that detects a pedestrian and begins braking automatically. The driver (consciousness) doesn’t initiate the brake—but can override it if needed. In neural terms, this veto power may reside in the prefrontal cortex, the brain’s executive control center. When a person resists an impulse—like refusing a second slice of cake or stopping themselves from sending an angry email—this region lights up in fMRI scans.

🏥Health Fact
In one experiment, participants were asked to press a button whenever they felt the urge—but to cancel the action if a red light appeared. Researchers found that the brain had already prepared the motor command before the red light flashed, yet participants could still abort the action up to 200 milliseconds before execution. This “last-second veto” suggests a narrow window of conscious control.
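That veto window can be stated as a simple timing rule. The sketch below is hypothetical; only the 200-millisecond point of no return is taken from the experiment described above, and the function name and example times are invented for illustration.

```python
# Time is measured relative to action execution at t = 0 ms,
# with negative values meaning "before execution".
POINT_OF_NO_RETURN_MS = -200  # veto must land >= 200 ms pre-execution

def veto_succeeds(cancel_signal_ms: float) -> bool:
    """Return True if a cancel signal arriving at `cancel_signal_ms`
    lands early enough to abort the already-prepared action."""
    return cancel_signal_ms <= POINT_OF_NO_RETURN_MS

print(veto_succeeds(-350))  # red light appeared early enough: True
print(veto_succeeds(-120))  # inside the point of no return: False
```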

This doesn’t absolve us of responsibility, argue neuroethicists. Instead, it reframes it. Just as a pilot doesn’t design the plane but is still accountable for its operation, humans may not originate decisions but are responsible for monitoring and correcting them. This perspective could reshape criminal justice, where intent is central to sentencing. If a person’s brain initiated a violent impulse, but they consciously suppressed it, are they less culpable than someone who acted on it?

Moderna’s Word Game: When “Vaccine” Becomes a Dirty Word

While neuroscientists grapple with the illusion of choice, biotech companies are navigating a different kind of dilemma—one of language, perception, and public trust. Moderna, the company behind one of the most widely used mRNA COVID-19 vaccines, is now applying its technology to cancer treatment. But here’s the twist: they’re avoiding the word “vaccine” entirely.

Their new therapy, designed to train the immune system to recognize and destroy tumor cells, works much like a traditional vaccine—delivering genetic instructions to cells to produce cancer-specific antigens. Yet, when Merck partnered with Moderna on the project, a spokesperson was quick to clarify: “It’s not a vaccine. It’s an individualized neoantigen therapy.”

Why the semantic sidestep? The answer lies in public perception. Despite strong evidence of their safety and efficacy, vaccines have become politically and emotionally charged. Anti-vaccine sentiment, fueled by misinformation during the pandemic, has spilled over into other medical domains. By calling the treatment a “therapy,” Moderna avoids that baggage and improves the odds of patient acceptance.

🤯Amazing Fact
The word “vaccine” comes from the Latin vacca, meaning cow. It traces to Edward Jenner, who in 1796 observed that milkmaids who had contracted cowpox were immune to smallpox. His breakthrough laid the foundation for immunology, but it also sparked the first anti-vaccine movements, with critics claiming the procedure was “unnatural” and “against God’s will.”

This isn’t the first time language has shaped medical progress. In the 1980s, HIV treatments were initially called “AIDS drugs,” which deterred patients due to stigma. Only when rebranded as “antiretrovirals” did uptake improve. Similarly, “gene therapy” has been rebranded as “genetic medicine” to sound less experimental.

But critics argue that Moderna’s wordplay is disingenuous. “If it walks like a vaccine and talks like a vaccine, it’s a vaccine,” says Dr. Elena Torres, an immunologist at Johns Hopkins. “Calling it something else just to avoid controversy undermines scientific transparency.”

The Power of Framing: How Words Shape Reality

The debate over what to call Moderna’s cancer treatment highlights a broader truth: language doesn’t just describe reality—it constructs it. In psychology, this is known as the framing effect, where the way information is presented alters decision-making. A “90% fat-free” product sounds healthier than one that’s “10% fat,” even though the two descriptions are identical.

In medicine, framing can mean the difference between life and death. A study published in The Lancet found that patients were 20% more likely to accept a treatment when it was described as having a “90% survival rate” rather than a “10% mortality rate.” The outcome was the same—but the language changed behavior.
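The equivalence behind these framings is trivial arithmetic, which is exactly the point. A quick check (a made-up sketch, not code from the study) confirms that each pair of descriptions carries identical information:

```python
# "90% survival rate" and "10% mortality rate" describe the same outcome.
survival_rate = 0.90
mortality_rate = 0.10
assert abs(survival_rate - (1 - mortality_rate)) < 1e-12

# Likewise "90% fat-free" and "10% fat" describe the same product.
fat_free = 0.90
fat = 0.10
assert abs(fat_free - (1 - fat)) < 1e-12

print("same facts, different frames")
```

The numbers never change; only the frame does, and the frame is what moves behavior.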

📊By The Numbers
72% of Americans support childhood vaccination—but only 54% support mandatory vaccination.

68% of people are more likely to trust a medical treatment if it’s called a “therapy” rather than a “vaccine.”

45% of oncologists report that patients hesitate to enroll in cancer vaccine trials due to fear of side effects.

89% of scientists believe that public distrust of vaccines is the biggest barrier to advancing immunotherapies.

33% of Americans still believe the COVID-19 vaccine can alter DNA—a myth repeatedly debunked by experts.

Moderna’s rebranding is a strategic response to these psychological realities. But it also raises ethical questions. Is it right to obscure scientific truth to gain public trust? Or is it a necessary compromise in a world where misinformation spreads faster than facts?

AI, Autonomy, and the Future of Decision-Making

As humans grapple with the limits of free will, artificial intelligence is stepping into the decision-making arena—with alarming speed. From autonomous weapons to algorithmic hiring tools, AI systems are making choices that were once the sole domain of humans. But unlike the human brain, which may initiate decisions unconsciously, AI operates with cold, deterministic logic.

This shift is sparking a new kind of arms race. The Pentagon is now urging AI firms to train their models on classified military data, hoping to gain an edge in drone warfare and cyber defense. Meanwhile, countries like Iran are reportedly exploring how OpenAI’s technology could be repurposed for surveillance or disinformation campaigns.

🏥Health Fact
AI is already making medical decisions. In 2023, an AI system in the UK correctly diagnosed a rare form of leukemia in a patient after human doctors had missed it. The system analyzed genetic data and flagged anomalies in under 10 minutes—something that would have taken weeks manually.

But as AI systems grow more autonomous, the question of accountability becomes urgent. If a self-driving car causes a fatal accident, who is responsible—the programmer, the manufacturer, or the AI itself? And if an AI weapon mistakenly targets civilians, can it be held morally accountable?

These dilemmas echo the neuroscience debate. Just as we may not truly “choose” our actions, AI doesn’t “decide” in any human sense—it executes code. Yet both raise the same philosophical question: when does a system become responsible for its outputs?

The Human Element: Why Consciousness Still Matters

Despite the growing power of algorithms and the hidden mechanics of the brain, one thing remains irreplaceable: human consciousness. It’s not just the seat of decision-making—it’s the source of empathy, creativity, and moral reasoning.

Consider NASA’s Artemis II mission, which will carry astronauts into deep space in preparation for future lunar and, eventually, Mars exploration. Its success won’t rest on technology alone—it will depend on judgment, adaptability, and teamwork. When hardware misbehaves, crews don’t simply fall back on pre-programmed responses. They improvise, drawing on intuition and experience to solve the problem.

💡Did You Know?
NASA astronauts undergo over 2,000 hours of training before a mission, including simulations of every possible failure. But no simulation can replicate the split-second decisions made in real time—decisions that often rely on gut feeling as much as data.

This human element is what makes us more than biological machines or carbon-based algorithms. It’s why, even as we uncover the neural roots of decision-making, we still value art, love, and justice—concepts that can’t be reduced to electrical impulses or code.

Conclusion: Navigating a World of Illusions and Innovations

We live in a paradoxical age. On one hand, science is revealing that our sense of agency may be an elaborate illusion—our brains deciding before we’re aware. On the other, we’re building technologies that mimic human cognition, forcing us to confront what it means to be “in control.”

From Moderna’s careful word choices to the neural veto power of “free won’t,” the stories of today reflect a deeper truth: decision-making is not a single act, but a complex interplay of biology, language, and culture. We may not be the authors of every choice, but we are the editors—the ones who revise, reflect, and take responsibility.

As AI advances and neuroscience deepens, the challenge won’t be to prove we have free will, but to design systems—both technological and societal—that honor the complexity of human choice. Because in the end, it’s not about whether we choose. It’s about how we live with the choices we make.

This article was curated from The Download: how humans make decisions, and Moderna’s “vaccine” word games via MIT Technology Review



Alex Hayes is the founder and lead editor of GTFyi.com. Believing that knowledge should be accessible to everyone, Alex created this site to serve as...
