
Aims & Scope

Unlike most journals in the field, Health Professions Education is a multidisciplinary journal that seeks to contribute to theory and research, inviting manuscripts from the full panorama of professions aimed at improving education. In contrast to other journals, which offer little opportunity for young researchers and educators to master the craft of scientific publishing in close interaction with experienced reviewers, Health Professions Education offers extended editorial support to young researchers. We encourage beginning authors to submit their work to our journal.

The publication of new findings and ideas

Of course, the journal provides ample room for the publication of new and exciting findings: experiments, correlational studies, case studies, and reviews that help our field of health professions education progress. But in addition, we wish to contribute to remedying the ailments of science that have recently been uncovered: the unhealthy focus on statistically significant results and spectacular findings, poor replicability, and publication bias. We invite researchers to submit papers that contain (a) replications of landmark studies in the field, (b) non-significant findings on interesting hypotheses, and (c) reports on the development and evaluation of new measurement instruments for use in health professions education. In addition, we will provide opportunities for further discussion of a particular paper by publishing the reviewers' reports alongside the paper itself and by inviting readers to join the discussion. Finally, we will invite authors to publish their full data.

The publication of replications

Much of the follow-up work in health professions education is built upon a finite number of landmark studies, studies that have shown certain non-obvious effects important to the training of health professionals. Let us give you a few examples: (a) clinical reasoning is assumed to be case-specific, that is, performance on a particular (set of) clinical cases does not predict performance on other cases; (b) global ratings of student performance tend to be more accurate than specific ratings; (c) multiple-choice questions and open-ended questions essentially measure the same underlying knowledge; (d) students are not able to evaluate themselves accurately; (e) problem-based learning fosters long-term retention of knowledge. These are important ideas, but how stable are they? We call in particular upon young researchers in the field, Master's and Ph.D. students, to seriously consider replicating some of these findings as part of their degree work. If such replications are done well, we promise to publish them. And to supervisors we would suggest: we have seen many Master's students wrestle with attempts to come up with something new and original, and we have seen many of them fail. Would replication not be an excellent alternative way of becoming familiar with the questions and methods that define the field?

The publication of non-significant findings

Negative outcomes of research usually have one of two sources: either there is no effect of the treatment studied, or the study was conducted so poorly that potential positive effects are masked by sloppy research practices. It should be clear that we are interested in publishing papers of the first category while avoiding papers of the second. Therefore, we encourage you to submit papers that report non-significant findings only if (a) the hypothesis studied is sufficiently interesting and embedded in the existing literature, (b) the samples studied are carefully described, (c) the instruments used either are well established or have good reliability and validity, and (d) the statistical analyses are appropriate to the questions at hand. We are of course particularly interested in non-significant findings that help us evaluate the status of well-established theories or hypotheses in our field.

The publication of results of test development and evaluation

There was a time when educationalists spent considerable time and energy on the development, calibration, and validation of tests and other instruments useful for assessing students or conducting research. We observe that, since journals no longer publish such reports, instrument design tends to be conducted sloppily and in an ad hoc fashion. We believe that the fact that every researcher develops his or her own instruments is one of the reasons why insufficient scientific progress is made in our field. Unlike in the physical sciences, there is no continuity and resourceful evolution in instrument design. We will enable researchers to take the art of instrument design seriously once again. If you submit a short report describing the characteristics of an instrument, test, or rating scale that can be of wider use, we will publish it, because we strongly believe that, as in the physical sciences, progress can only be made if there are well-established protocols for measuring particular constructs and researchers use each other's well-calibrated instruments.

We invite you to submit your work to our journal.