The Applied Science Blog

Where Science Meets Product Innovation

How Peer Review Reduces Costly Design Errors

[Image: Engineer presenting a mechanical system design concept during a technical peer review meeting in medtech]

 In science, we peer review. In engineering, we prototype and test. But somewhere between rapid innovation and commercial urgency, many teams quietly skip the most statistically effective quality control in product development: rigorous, ongoing peer review.

When it comes to medical devices and scientific instrumentation, the data is unambiguous. Peer review—when conducted routinely and deeply—cuts defect rates by over 80%, increases productivity, accelerates delivery, and saves millions in rework and recalls. Yet in today’s development culture, peer review is often reduced to a milestone checkbox—or bypassed altogether.

The cost of that omission is far from theoretical. It's measurable in billions of dollars lost, failed product launches, and sometimes—tragically—lives.

By the Numbers: Why Peer Review Pays

Multiple studies show that the earlier a defect is caught, the cheaper it is to fix. According to IBM’s Systems Science Institute, a design defect costs $1 to fix in early development, $10 in testing, and over $100 once the product is in the field (IBM, 2008).

Peer reviews are particularly effective at catching these early defects. On average, formal design or code inspections identify 55–60% of all defects, while even the best testing methods catch only 25–45% (Fagan, 1986; SmartBear, 2017). That’s why companies that implement regular peer review see dramatic improvements:


  • AT&T reported a 90% reduction in software defects and a 14% increase in productivity after introducing structured peer reviews. 
  • IBM’s Orbit project (500,000 lines of code) used 11 layers of peer review and delivered ahead of schedule, with only ~1% of expected defect density. 
  • NASA’s Jet Propulsion Lab estimates each peer review saves ~$25,000 by preventing downstream errors.
     

In short: peer review doesn’t slow you down—it prevents the things that do.
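
To make these figures concrete, here is a minimal back-of-envelope model (a sketch in Python) that combines the 1:10:100 cost escalation with the detection rates quoted above; the defect count, base fix cost, and testing catch rate are illustrative assumptions, not figures from the cited studies.

```python
# Back-of-envelope model of rework cost with and without peer review.
# Assumptions (illustrative only): 100 latent design defects, $1,000 to fix a
# defect caught early, IBM's 1:10:100 escalation (early : testing : field),
# ~60% of defects caught by peer review, ~40% of the remainder caught in testing.

DEFECTS = 100
BASE_COST = 1_000  # dollars to fix a defect caught in early development
ESCALATION = {"early": 1, "testing": 10, "field": 100}

def total_rework_cost(review_catch_rate: float, test_catch_rate: float = 0.40) -> float:
    caught_early = DEFECTS * review_catch_rate
    caught_testing = (DEFECTS - caught_early) * test_catch_rate
    escaped_to_field = DEFECTS - caught_early - caught_testing
    return (caught_early * BASE_COST * ESCALATION["early"]
            + caught_testing * BASE_COST * ESCALATION["testing"]
            + escaped_to_field * BASE_COST * ESCALATION["field"])

print(f"With peer review (60% caught early): ${total_rework_cost(0.60):,.0f}")
print(f"Without peer review (0% caught early): ${total_rework_cost(0.00):,.0f}")
```

Even under these deliberately simple assumptions, the no-review scenario costs roughly 2.4 times more; the AT&T, IBM, and JPL figures above reflect the same effect at much larger scale.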


How Often Do Companies Actually Peer Review?


 Despite the clear value, most teams don’t implement peer review as a deep, continuous practice:

  • Only 21% of teams conduct daily peer reviews; 27% review weekly.
  • 68% rely on informal, infrequent checks—often just once a month. 
  • A full 26% of teams report doing little to no peer review at all (SmartBear, 2017).

[Image: Peer review in medical device design: reduce costs, defects, and recalls with early validation]

[Image: Theranos Edison device, nanotainer, and Holmes at TechCrunch — the promise that collapsed]

Theranos: A $724 Million Case of Skipped Validation — What Really Went Wrong

Few stories illustrate the cost of bypassing peer validation more starkly than Theranos. Once valued at $9 billion, the company raised $724 million on claims it could run hundreds of blood tests from a single finger prick. But its core device, the Edison, was never independently validated — nor peer-reviewed.


What Went Wrong?


No independent feasibility or validation studies: Theranos never published data comparing Edison’s performance to gold-standard methods like ELISA or LC-MS/MS. Basic metrics like limit of detection (LOD), coefficient of variation (CV), and cross-reactivity were never peer-reviewed or externally reproduced.

💡 In contrast, a peer-reviewed feasibility study would typically involve comparing device output to gold-standard methods like ELISA, LC-MS/MS, or hematology analyzers across multiple sample cohorts, including real-world patient samples with known comorbidities. None of this was ever done.
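
For readers unfamiliar with those metrics, here is a minimal sketch (in Python, with invented numbers) of how a feasibility study quantifies precision (CV) and estimates a limit of detection from blank variability and calibration slope, following the common ICH-style 3.3·σ/S convention.

```python
import statistics

# Hypothetical replicate results for one analyte (all numbers invented for
# illustration; a real study would use qualified reference materials).
replicates = [4.8, 5.1, 4.9, 5.2, 5.0, 4.7]            # measured conc., ng/mL
blank_responses = [0.021, 0.019, 0.023, 0.020, 0.022]  # instrument signal of blanks
calibration_slope = 0.0041                              # signal units per ng/mL

mean_conc = statistics.mean(replicates)
sd_conc = statistics.stdev(replicates)
cv_percent = 100 * sd_conc / mean_conc       # coefficient of variation (precision)

sigma_blank = statistics.stdev(blank_responses)
lod = 3.3 * sigma_blank / calibration_slope  # ICH Q2-style limit-of-detection estimate

print(f"CV  = {cv_percent:.1f}%")
print(f"LOD ≈ {lod:.2f} ng/mL")
```

A reviewer would compare values like these, along with cross-reactivity data, against the gold-standard methods named above before any clinical claim is made.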
 

Design flaws went unaddressed: The Edison suffered from reagent flow issues, sample instability, and unreliable heating. Internal teams reported false potassium, sodium, and hemoglobin results — prompting secret substitution with Siemens analyzers. A proper peer review would have flagged these as critical failures. Scientists who raised concerns were ignored or silenced. The company avoided internal design reviews and blocked external data sharing — citing intellectual property, not scientific rigor.

💡 In contrast, properly conducted peer reviews would have involved a multi-disciplinary board: analytical chemists, hematologists, mechanical engineers, and quality/regulatory leads. Such a group might have halted use of the Edison for any clinical-facing application until metrological traceability and reliability were confirmed. In regulated medtech firms, this is enforced through Design Review Boards (DRBs) under ISO 13485 and FDA QSR.
 

Regulatory and investor oversight failed: Despite huge funding, no investor demanded technical due diligence, and the FDA saw only curated subsets of pilot data. When the truth surfaced, investors and partners lost hundreds of millions of dollars (the breakdown is detailed below).

💡 In contrast, in normal development cycles, prototype-level instruments are published in proof-of-concept journals (e.g., Lab on a Chip, Analytical Chemistry) before reaching clinical stages. Peer review at this phase helps surface issues with signal stability, microfluidic channel blockage, material leaching from cartridges, and the mechanical robustness of actuators or heaters. Because Theranos avoided these steps, design weaknesses — such as unstable sample heating and inconsistent reagent delivery in the Edison — persisted until late-stage rollout.


The Damage of Skipping the Review Process


Investors, despite putting in hundreds of millions, never demanded independent technical due diligence—a common pitfall when investor excitement outpaces technical skepticism. The FDA was only shown limited data, and even those were selectively curated pilot study results rather than full V&V packages.


📉 According to SEC filings and court records, when Theranos collapsed:

  • Investors lost nearly $600 million collectively, including $100 million from Rupert Murdoch, $125 million from the Walton family, and $96 million from Betsy DeVos’s family office. 
  • Walgreens spent over $140 million building testing centers and later filed suit. 
  • Elizabeth Holmes and Sunny Balwani were charged with fraud, facing up to 20 years in prison.
     

All of this could have been prevented if even a single phase-gated, cross-functional design review had been performed by independent experts.

[Image: ROV view of Titan submersible debris at ~3,774 m depth (Coast Guard MBI, public domain)]

OceanGate’s Titan: Ignored Warnings, Experimental Materials, and the Fatal Cost of Skipping Peer Review

In June 2023, the Titan submersible imploded catastrophically during a dive to the Titanic wreck, killing all five people on board.

The implosion occurred in less than 20 milliseconds, destroying the vessel before the crew could react. While media headlines focused on the tragedy, the engineering world recognized a deeper systemic failure: the rejection of independent peer validation and fundamental design review processes.

OceanGate, a private deep-sea exploration company, touted innovation — but sidestepped industry norms. The Titan was designed to reach depths of ~3,800 meters, where pressures exceed 5,500 psi. To withstand such extreme environments, deep-sea submersibles typically use spherical titanium or steel hulls, rigorously tested and certified by classification societies like DNV or the American Bureau of Shipping (ABS).
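
The pressure figure is easy to sanity-check with a one-line hydrostatic calculation; the sketch below assumes a constant seawater density, whereas a real design analysis would use depth-corrected density and add safety margins.

```python
# Rough hydrostatic check of the ~5,500 psi figure at Titanic depth.
RHO_SEAWATER = 1025.0   # kg/m^3, approximate
G = 9.81                # m/s^2
DEPTH_M = 3800.0        # metres

pressure_pa = RHO_SEAWATER * G * DEPTH_M   # gauge pressure in pascals
pressure_mpa = pressure_pa / 1e6
pressure_psi = pressure_pa / 6894.757      # 1 psi = 6894.757 Pa

print(f"~{pressure_mpa:.0f} MPa, i.e. ~{pressure_psi:,.0f} psi")  # ≈ 38 MPa ≈ 5,500 psi
```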

OceanGate, instead, chose a novel cylindrical hull made from carbon fiber reinforced polymer (CFRP) with titanium endcaps — a first for crewed submersibles at such depth. The problem wasn't just the material — it was the lack of third-party validation for such a radical change.


 1. Carbon Fiber Under Compression: A Known Risk


CFRPs excel in aerospace and automotive applications where tension and flexural strength dominate. But in deep-sea environments, compression resistance and fatigue durability under hydrostatic pressure are critical — and CFRP has known weaknesses in microcracking, delamination, and unpredictable failure modes under long-term cyclic loads.

Engineers in the field have long flagged that carbon composites, especially in thick, bonded layups, can develop interlaminar shear stress and voids that propagate suddenly, leading to brittle failure without warning. Traditional submarine hulls, in contrast, exhibit plastic deformation before rupture — buying precious time.

OceanGate reportedly reused the same CFRP hull over multiple dives, with no formal acoustic emission monitoring or strain gauge logging — both essential for tracking cumulative damage in composite structures.
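
To illustrate why per-dive load logging matters, the sketch below applies Miner's rule, the simplest linear damage-accumulation model; all cycle counts and allowable-cycle figures are invented, and composites under hydrostatic compression generally require far more sophisticated, test-backed fatigue models than this.

```python
# Miner's rule: cumulative damage D = sum(n_i / N_i) over load levels;
# D approaching 1.0 predicts fatigue failure. Numbers are invented purely
# to illustrate the bookkeeping that per-dive logging enables.

dive_log = [
    # (cycles experienced, allowable cycles at that load level)
    (12, 5000),   # shallow checkout dives
    (10, 400),    # dives near design depth
    (2, 150),     # dives with logged pressure excursions or loud acoustic events
]

damage_index = sum(n / allowable for n, allowable in dive_log)
print(f"Cumulative damage index D = {damage_index:.3f}")
if damage_index >= 1.0:
    print("Predicted fatigue failure: retire or fully re-qualify the hull.")
elif damage_index >= 0.5:
    print("Substantial accumulated damage: independent inspection warranted.")
```

Without strain or acoustic-emission logs, even this crude accounting is impossible, which is exactly the gap reviewers would have flagged.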


 2. Classification and Certification Were Deliberately Avoided


OceanGate explicitly refused to have Titan certified by DNV-GL or ABS, stating on its website that such processes “inhibit innovation.” Instead, the company claimed to perform its own internal assessments — effectively self-certifying an experimental design operating at 3,800 meters.

In 2018, the Marine Technology Society’s Manned Underwater Vehicles Committee issued a letter warning that bypassing third-party review “could lead to catastrophic outcomes.” OceanGate’s CEO, Stockton Rush, dismissed the concern, famously arguing that “safety is a pure waste” and “at some point, you're going to have to take risks” — sentiments captured in interviews and internal emails later made public.


 3. Inadequate NDE and Monitoring of Structural Health

Titan lacked essential non-destructive evaluation (NDE) protocols. Submersibles of this class would typically be inspected using:


  • Ultrasonic testing for delamination or voids in composite sections 
  • Acoustic emission sensors to detect microcrack propagation 
  • Finite element simulations with third-party validation for stress mapping
     

Instead, Titan relied on a “Real-Time Health Monitoring System” — proprietary software that analyzed hull behavior during descent. However, it lacked peer-reviewed validation and was not transparent in its metrics. No peer-reviewed study confirmed whether Titan’s laminate layup, bonding technique, or fatigue behavior met any industry-recognized standard.
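
By way of contrast, a transparent structural-health check can be as simple as trending acoustic-emission (AE) hit counts across dives and grounding the vehicle when the trend grows; the sketch below is a hypothetical illustration of that idea, not a description of OceanGate's proprietary system.

```python
# Hypothetical trend check on acoustic-emission (AE) hit counts per dive.
# A sustained rise at comparable depth profiles suggests progressive damage
# (e.g., microcracking) and should trigger independent NDE. Numbers invented.

ae_hits_per_dive = [120, 135, 128, 190, 260, 410]

def ae_trend_alarm(history: list[int], window: int = 3, ratio: float = 1.5) -> bool:
    """Flag when the recent-window average exceeds the baseline average by `ratio`."""
    if len(history) < 2 * window:
        return False
    baseline = sum(history[:window]) / window
    recent = sum(history[-window:]) / window
    return recent > ratio * baseline

if ae_trend_alarm(ae_hits_per_dive):
    print("AE activity trending upward: ground the vehicle pending third-party inspection.")
```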

[Image: BD Alaris PC Unit 8015 infusion pump affected by the FDA Class I recall for software and hardware issues]

Case Study: BD Alaris Infusion Pump — A Multi-System Failure from Skipped Design Integration

 Between 2019 and 2020, the BD Alaris System, a modular infusion platform used to deliver IV fluids and medications, became the subject of multiple FDA Class I recalls—the highest severity category for medical devices (FDA, 2020).

The recalled devices exhibited systemic failures across hardware, software, and user interface, including:

  • Uncaught memory errors in the firmware, leading to intermittent infusion stoppage. 
  • Delayed dose delivery due to bugs in the auto-programming feature during queue processing. 
  • Power module instability, where low-voltage shutdowns were triggered mid-infusion due to poor failover logic. 
  • Non-intuitive alarm prioritization, which obscured critical alerts and increased the risk of user override or delay.
     

The FDA linked these failures to one patient death and 55 injury reports. BD halted new shipments and issued multiple patches. But the technical cause wasn’t a single bug — it was a lack of robust system-level validation and cross-functional design review.


Why Peer Review Would Have Helped


BD’s internal teams had documentation of certain risks, but integration-level failure modes—how software, hardware, and user inputs interact under edge-case conditions—were not peer-reviewed or independently validated.


A thorough design validation process would have included:

  • dFMEA and FTA (Fault Tree Analysis) with input from software engineers, electrical engineers, and clinical users. 
  • Simulated alarm-storm scenarios to test alert suppression and override logic. 
  • Cross-module interface testing to ensure stability when syringe drivers, PCA modules, and PCU units were hot-swapped or cascaded.
     

Without this peer-reviewed architecture, failure points went undetected across subsystems—leading to safety risks in real clinical environments.
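
A core artifact of such a review is the dFMEA risk-priority ranking. The sketch below shows the standard Severity × Occurrence × Detection (RPN) scoring on a few hypothetical Alaris-like failure modes; the modes and scores are illustrative examples, not BD's actual analysis.

```python
# Minimal dFMEA sketch: RPN = severity * occurrence * detection, each scored 1-10.
# Failure modes and scores are hypothetical illustrations, not BD's data.

failure_modes = [
    # (description,                                    severity, occurrence, detection)
    ("Firmware memory error halts infusion",                 10,          3,         7),
    ("Auto-programming queue delays dose delivery",           8,          4,         6),
    ("Low-voltage shutdown mid-infusion",                     9,          3,         5),
    ("Critical alarm masked by lower-priority alerts",        9,          5,         8),
]

ranked = sorted(
    ((desc, sev * occ * det) for desc, sev, occ, det in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"RPN {rpn:>3}  {desc}")
# The highest-RPN items (here, alarm masking) are exactly what a cross-functional
# review board would force the team to mitigate before release.
```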


The Cost


BD suspended new product rollouts for over three years. The remediation effort, including software redesign, FDA re-submissions, and device re-certifications, cost hundreds of millions of dollars. In 2024, BD paid a $175 million SEC penalty for misleading investors by downplaying the scale of known risks (Reuters, 2024).


The Lesson


For complex medtech systems, peer review must extend beyond individual components. It must ask how the system behaves when real-world variability hits—power interruptions, network lags, simultaneous alerts, confused users. Without that scrutiny, design blind spots turn into patient risks and regulatory failures.

The Alaris case shows that even seasoned engineering teams miss what collective peer insight might catch. And when the system is critical to human health, missing just one interaction can be fatal.

The Verdict: Peer Review Is a Business Strategy — and a Scientific Responsibility

 Peer review is not just about checking a box. It’s about building right from the start.


Time and again, we've seen how skipping peer validation leads to failure, sometimes fatally. These weren't rare exceptions; they were avoidable outcomes rooted in the same decision: to push forward without stopping to ask the hard questions. And often, without inviting the right people to ask them.

Yes, peer reviews take time. Yes, deadlines and investor pressure make it tempting to cut corners. But the data is clear: what you spend in review, you more than gain in savings, speed, safety — and survival.

And beyond the technical benefits, peer review builds something harder to measure but equally valuable:


  • Trust among your team, by making review a shared discipline, not a personal judgment. 
  • Respect within the scientific community, which recognizes transparency and rigor. 
  • Confidence among investors, who know that science isn’t a gamble when it's well-vetted. 
  • And momentum, because when you eliminate the “unknown unknowns” early, your roadmap becomes real.
     

At The Applied Science, we help innovators do it right from the beginning — through expert peer review, feasibility assessment, and strategy support. If you're building something complex, don’t do it alone. Let's build it right — together.

By Diana Saltymakova | Product Development Scientist | Published on May 13, 2025
