Common Problems with NP/PA 'Research'

  1. NPs/PAs studied are under physician supervision or following physician-created protocols. This supervision may be undisclosed, unclear, or downplayed in the study. Very few studies, if any, compare unsupervised midlevels to attending physicians.

  2. Failure to perform randomized controlled trials (RCTs). RCTs are the gold standard of clinical research, yet most equivalency studies are not RCTs. Ethical questions also remain about the potential for a lower standard of care for patients assigned to the midlevel group.

  3. NPs/PAs are compared to partially trained physicians, such as interns or residents. Rather than comparing midlevels to attending physicians who have completed training, equivalency studies often compare experienced midlevels to physicians-in-training: interns (residents in their first year of post-graduate training, who may have only months of experience in internal medicine or surgery) or more senior residents.

  4. NPs/PAs may receive extra training that is not reflective of typical NP/PA practice. For example, in the famous Mundinger study, specially selected NPs received additional training with physicians before the study began. This is not reflective of actual practice and significantly limits the external validity of such studies.

  5. Studies published prior to 2000 may no longer represent current NP/PA graduates. In the past 20 years, there has been a significant boom in the number of direct-entry and online NP programs, as well as the development of online PA programs. Studies conducted before 2000 therefore do not reflect the current NP workforce in terms of quality of training and education.

  6. Studies with inadequate follow-up or time frames. Equivalency studies often follow primary care outcomes for only short periods, ranging from six months to two years or less. Studies included in the Cochrane database averaged just four months, and one study lasted only two weeks. For most conditions, such time frames are simply inadequate to capture the mortality difference between no intervention and medical care, much less between NP care and physician-led care. Very few studies have follow-up periods long enough to detect differences in outcomes based on who provides the care.

  7. Claims and headlines far exceed the actual data. Equivalency studies may claim no difference in endpoints such as patient mortality, yet they rely on surrogate measures that may or may not reflect those endpoints.

  8. Bias. The Cochrane review found a high risk of bias in many of the studies it examined, including failure to follow intention-to-treat protocols and exclusion of problematic data points.



Only 25% of all NPs in Oregon, an independent-practice state, practiced in primary care settings.