Certification of Medical Systems — Part 2

Written by Todd Brian, Validated Software.

In Part 1 of this series, we looked at off-the-shelf software and the costs of safety-critical products. In Part 2, we'll examine the processes for developing and validating safety-critical products.

Recalls of Medical Device Software

According to an FDA analysis of 3,140 medical devices conducted between 1992 and 1998, 7.7% of them were recalled due to faulty software. Of those software-related recalls, 79% were attributable to bugs introduced after the product's initial release. The FDA reports that in the subsequent years ending in 2005, nearly one in three of all recalled medical devices contained software.

Just three years later, in an April 29, 2008, presentation titled “CDRH Software Update,” John F. Murray Jr., an FDA software compliance expert, reported that faulty software accounted for 18% of all recalled devices. In a period of ten years, the rate of software-related recalls had increased by a whopping 133%.

This suggests that if the growth rate of recalls remains constant, faulty software will account for over 40% of recalls in this industry within ten years.

The growing rate of recalls is a clear indication that the current regulatory process is inadequate. The fact that nearly four out of five bugs found in the recalled medical devices were introduced post-release may be one reason that, as of March 2010, all but the most minor of medical devices sold within the EU must now comply with a more rigorous medical software standard known as IEC 62304.

U.S. manufacturers, along with other international and domestic stakeholders, participated in the creation of this international standard, even though it has not been adopted by the FDA. Today, IEC 62304 is the most cost-effective approach to meeting the certification needs of the global market.

Before looking at the EU and U.S. regulatory climate, let's consider just what safety-critical means in a medical context.

Safety Critical

Most medical software falls into a special category called safety-critical. The operating environment, operators, patients, and electro-mechanical portion of the medical device together make up a safety-critical system. The failure or improper operation of such a system may allow or cause:

  • Injury or loss of life to a human or animal
  • Environmental damage
  • Damage or loss of capital equipment

The primary focus of medical device software is the safety of patients, operators, and staff. Typical safety-critical software life cycle tasks include:

  • Planning
  • Requirements
  • Design
  • Coding and integration
  • Testing and verification
  • Configuration management (CM)
  • Quality assurance
  • Post release maintenance

Many of the activities that take place within software development can be likened to the hardware redundancies that make up critical hardware systems. The redundant practices (e.g., code reviews, traceability, code coverage) remove single points of failure. And while these can be costly, they are less expensive than fixing failures after the fact.

For manufacturers who have product families or products that share code, not only can code be reused, but all of the shared artifacts can be as well (artifacts in this case include everything concerned with certification). In fact, because of the additional processes and practices mandated in safety-critical development, the relative savings gained through code reuse easily exceed those of commercial software development.

There are several areas where the pain points of consumer electronics match those of medical device development: scheduling, time-to-market pressures, and finding sufficient staff to develop and maintain software such as embedded real-time operating system (RTOS) kernels, networking stacks, and file systems. In many situations, it is more cost-effective to leave the development and maintenance to the experts and license software as needed.

When properly vetted, the use of commercial software can be a wise choice that speeds development, improves quality (or at least does not decrease it), reduces overall development costs, and reduces the stress placed on development teams. Commercial software allows developers to stay focused on achieving the core goals of the project, rather than developing commodity software.

Off-the-shelf software intended for general-purpose devices is not the same as software that has been developed, verified and validated for use in safety-critical devices. A medical device manufacturer using OTS software generally gives up software life cycle control, but still bears the responsibility for the continued safe and effective performance of the medical device.

The manufacturer has two choices when considering the use of OTS software in a design. One is to purchase software that, despite the OTS label, is designed, verified, and validated, and comes with the same documentation that is expected by the FDA or another certification agency. Micrium's µC/OS and several of its RTOS components fit this category. µC/OS has been deployed in many medical designs, and has 100% of the documentation required to comply with the FDA 510(k)/Pre-market Approval (PMA) process; it also complies with IEC 62304, IEC 60601, and ISO 14971.

An alternate choice involves using so-called “Software of Unknown Provenance” (SOUP). This is software that has not been developed under a documented software development process or methodology, or whose safety-related properties are unknown. This option is at first appealing, as it appears much less expensive than properly supported software, and in many cases it offers great features. But there are inherent negatives, the primary one being that the code has not been properly verified, validated, and documented. The result is often a more expensive route.

For guidance on using SOUP, see: Guidance for Industry, FDA Reviewers and Compliance on Off-the-Shelf Software Use in Medical Devices, Office of Device Evaluation, Center for Devices and Radiological Health, Food and Drug Administration, September 1999.


Traceability

Traceability is the ability to trace the history of an item back to its source, and the idea is used and implemented in many ways. Each product requirement must be traceable back to its origin, be it a general requirement, a conversation with a user, the adoption of a standard, or adherence to a new regulation. A natural extension of this idea is to add a traceable attribute to each requirement that points to its implementation and validation. Doing so documents the path from the origin of the requirement through its implementation and validation.

When traceability is not pursued with vigor, the result can be added project expense and missed schedules.

When used properly, and with its role expanded to encompass the full gamut of software development activities (including the creation of specifications, software architecture design, detailed design, verification, validation, test and quality assurance, technical publications, maintenance, and software reuse), traceability provides insight into every aspect of the project life cycle.

The concept of traceability, originally intended for validation, has rapidly expanded. Some companies consider it a core component of the Scope of Work (SOW). It provides a way to manage and estimate the cost of changes to project requirements, and some teams use it in place of prototyping to prove understanding and to communicate the nature of the design to clients.
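A traceability matrix of this kind is easy to model in software. The following is a minimal sketch, not a real tool; the requirement, file, and test-case identifiers are all hypothetical, and a real matrix would carry far more detail per link.

```python
# Minimal traceability-matrix sketch. Each requirement records its origin
# plus the artifacts that implement it and the tests that validate it.
# All identifiers below are hypothetical examples.
trace_matrix = {
    "REQ-001": {"origin": "IEC 62304 clause", "code": ["pump_ctrl.c"], "tests": ["TC-101"]},
    "REQ-002": {"origin": "user interview",   "code": ["alarm.c"],     "tests": ["TC-102", "TC-103"]},
    "REQ-003": {"origin": "risk analysis",    "code": [],              "tests": []},
}

def untraceable(matrix):
    """Return requirements that lack an implementation or a validating test."""
    return sorted(req for req, links in matrix.items()
                  if not links["code"] or not links["tests"])

print(untraceable(trace_matrix))  # ['REQ-003']
```

A check like this makes gaps visible long before an auditor does: REQ-003 exists on paper but traces to neither code nor a test, which is exactly the kind of single point of failure the practice is meant to expose.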

Cost of Safety Critical Software

Most developers acquainted with safety-critical software will tell you that it is anything but cheap.

A rule of thumb is that commercial-grade software costs $15 to $30 per line of code. Safety-critical software, on the other hand, generally costs five to ten times that amount; $75 to $300 per source line is not unrealistic. And given that almost 20% of all recalled devices are recalled due to software faults, the cost of recalls adds significantly to that total.
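The arithmetic is worth making concrete. The sketch below applies the rule-of-thumb figures above to a hypothetical 50,000-line project; the project size is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope cost comparison using the per-line figures above.
SLOC = 50_000  # hypothetical project size in source lines of code

commercial = (15 * SLOC, 30 * SLOC)        # $15-$30 per line
safety_critical = (75 * SLOC, 300 * SLOC)  # 5-10x the commercial rate

print(f"Commercial-grade: ${commercial[0]:,} - ${commercial[1]:,}")
print(f"Safety-critical:  ${safety_critical[0]:,} - ${safety_critical[1]:,}")
```

For this hypothetical project, commercial-grade development runs $750,000 to $1.5 million, while the safety-critical equivalent runs $3.75 million to $15 million, and neither figure yet includes the cost of a recall.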

Other factors to consider include:

  • Many software developers still follow an 80/20 rule, whereby 80% of the cost lies in coding and debugging and only 20% in design.
  • Results from a 2002 study indicate that one third of the cost associated with faulty software could be eliminated by proactive processes.
  • Up to $59 billion (in 2002 dollars) of waste is attributable to faulty software, and over half of that amount is borne by users.
  • What is the impact of when a fault is introduced vs. when it is found?
  • What is the total cost of a recall?
  • What is the cost of civil litigation?

Safety-critical software standards represent a rational compromise of social and market forces. On one side is stakeholder safety, and on the other, positive economic return for investors.

There is no such thing as bug-free software. At the same time, there comes a point where increasing the investment in quality will not generate a positive return, even when developing a safety-critical device. The quality of software can be determined by comparing its characteristics (features, capabilities, behavior) with the set of requirements that governed its creation: if its characteristics satisfy the requirements completely, the quality is high; if they do not, the quality is low. Success in documenting software requirements is therefore a crucial factor in the successful validation of the resulting code.

Requirements are often described in terms of a hierarchy (i.e., high-level and low-level requirements).

For example, the system requirements for a medical device will include both hardware and software requirements. The software requirements can be further broken down into more granular sets, such as requirements for an RTOS and for application software. This decomposition continues until the requirements cannot be broken down any further while still being actionable and measurable. Verification and validation each play an essential role in any discussion of software quality.
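Such a hierarchy can be sketched as a simple tree. The requirements below are invented for illustration only; they stand in for the system, hardware, and software levels described above.

```python
# Sketch of a requirements hierarchy: system requirements decompose into
# hardware and software branches. All requirement text is hypothetical.
requirements = {
    "SYS-1: infusion pump delivers the programmed dose": {
        "HW-1: motor resolution supports 0.1 mL steps": {},
        "SW-1: application computes the dose schedule": {
            "SW-1.1: RTOS task meets its 10 ms deadline": {},
            "SW-1.2: dose limits are checked before delivery": {},
        },
    },
}

def leaves(tree):
    """Collect the leaf requirements -- decomposition stops here."""
    out = []
    for req, children in tree.items():
        out.extend(leaves(children) if children else [req])
    return out

print(len(leaves(requirements)))  # 3
```

The leaves are the actionable, measurable requirements: they are what individual tests verify, while the levels above them are validated by showing that every child is satisfied.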

The terms verification and validation are often used interchangeably. Some even go as far as to use verification, validation, and testing as if they mean the same thing. It is important to understand the difference between the two terms, as they represent fundamental concepts in safety-critical development processes. According to the FDA:

Verification addresses the question: “Are we making the software correctly?”

The objective of software verification is to create and document objective evidence that the design outputs of each phase of the software development life cycle meet all of the specified requirements for that phase. Software verification looks for consistency, completeness, and correctness of the software and its supporting documentation as it is being developed, and provides support for a subsequent conclusion that the software is validated. Software testing is one of many verification activities intended to confirm that software development output meets its input requirements. Other verification activities include various static and dynamic analyses, code and document inspections, walk-throughs, and other techniques.

In comparison, validation addresses the question: “Are we making the correct software?”

The FDA considers software validation to be “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”

In Part 3 of this series, we'll look at the regulatory environment for the medical device market.

