Regular readers of this blog know that we’re closely following the FDA’s proposed regulatory framework for software as a medical device (SaMD), known as precertification—Pre-Cert for short. Generally, Pre-Cert involves a premarket evaluation of a software developer’s culture of quality and organizational excellence and continual, real-time postmarket analyses to assure software meets the statutory standard of reasonable assurance of safety and effectiveness.
The FDA’s 2019 Pre-Cert test plan features mock reviews, conducted under the proposed framework, of submissions already submitted to the agency (i.e., retrospective reviews). It also uses submissions from software developers who intend to seek marketing authorization in 2019 to construct streamlined review packages that would be evaluated concurrently with the traditional regulatory submissions (i.e., prospective reviews).
We were excited to see that FDA recently published an update on the Pre-Cert pilot in which they stated:
Retrospective reviews using an Excellence Appraisal and a Streamlined Review package “reported that a regulatory decision could be made using the information acquired” from such documents;
“The FDA has conducted several Excellence Appraisals with pilot participants” and “the FDA has confirmed that the elements identified in the [working model]…provide a comprehensive view of an organization’s capabilities”; and
“The FDA continues to probe the practicality of identifying Real-World Performance Analytics elements using specific test cases”; that is, FDA is still figuring out which real-world data elements are meaningful for evaluating SaMD safety and effectiveness.
However, the update does not contain any particularly enlightening information and reflects what FDA intended the Pre-Cert program to do all along. The probable purpose is to keep this program on the public’s radar and to show that progress is being made towards implementing it fully despite questions that remain unanswered. Some of those questions FDA admits it is continuing to explore (see: real world performance analytics) while others float in the ether.
One notable item missing from this update is a list of pilot participants beyond the original nine who volunteered to help FDA explore creating the new framework. Even if pilot participants preferred to keep their identities confidential, FDA could have provided a number and characterized the participants as small, medium, or large companies. Depending on the data, doing so could help allay concerns raised by some Pre-Cert watchers that there is minimal—if any—benefit for small companies to participate in a pilot or even a fully authorized program. Among the concerns we’ve heard from small companies is that this program—in particular the Excellence Appraisal—is no less time-consuming or burdensome than the traditional pathway, and may actually be more so.
And speaking of both authorization and unanswered questions, FDA still has not responded to the October 2018 letter from Senators Warren, Murray, and Smith about Pre-Cert. Congress may not look kindly on authorizing or funding an agency program when that agency is plowing ahead despite concerns raised by legislators. FDA could be looking to the next device user fee agreement to provide some legislative cover for its plans: those agreements are complex and difficult even for Congress to pick apart. Discussing the future of Pre-Cert during a user fee negotiation makes further sense because FDA has said it envisions faster—and in some cases no—premarket reviews for Pre-Cert products. Will faster reviews cost more than a 510(k), De Novo, or PMA? Will software that requires no review have no user fee? If so, how will FDA pay to monitor and analyze real-world performance data?
These and other questions are part of what makes this program interesting to follow. The FDA update is available here; stay tuned to Mintz Viewpoints for our continued coverage of software policy.