IMRAD
In scientific writing, IMRAD or IMRaD (/ˈɪmræd/) (Introduction, Methods, Results, and Discussion)[1] is a common organizational structure for scientific documents. IMRaD is the most prominent norm for the structure of a scientific journal article of the original-research type.[2]
Overview
Original research articles are typically structured in this basic order:[3][4][5]
- Introduction – Why was the study undertaken? What was the research question, the tested hypothesis or the purpose of the research?
- Methods – When, where, and how was the study done? What materials were used or who was included in the study groups (patients, etc.)?
- Results – What answer was found to the research question; what did the study find? Was the tested hypothesis true?
- Discussion – What might the answer imply and why does it matter? How does it fit in with what other researchers have found? What are the perspectives for future research?
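The section order above can be sketched programmatically. The following Python fragment is purely illustrative (the helper name and the Markdown conventions are my own, not part of any standard); it emits an IMRaD manuscript skeleton pairing each heading with the question that section answers:

```python
# Illustrative sketch: emit a Markdown skeleton of an IMRaD manuscript,
# pairing each section heading with the question that section answers.
IMRAD_SECTIONS = [
    ("Introduction", "Why was the study undertaken?"),
    ("Methods", "When, where, and how was the study done?"),
    ("Results", "What did the study find?"),
    ("Discussion", "What might the answer imply and why does it matter?"),
]

def imrad_skeleton(title: str) -> str:
    """Return a Markdown outline with one section per IMRaD component."""
    lines = [f"# {title}", ""]
    for heading, question in IMRAD_SECTIONS:
        lines += [f"## {heading}", f"<!-- {question} -->", ""]
    return "\n".join(lines)
```

Calling `imrad_skeleton("My Study")` yields a four-section outline in the canonical order Introduction, Methods, Results, Discussion.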
The narrative flow of IMRaD-style writing has been explained with a 'wine glass model'[4] or hourglass model.[3]
Writing compliant with the IMRaD format typically opens with "(a) the subject that positions the study from the wide perspective" and "(b) outline of the study", develops through "(c) study method" and "(d) the results", and concludes with "(e) outline and conclusion of the fruit of each topics" and "(f) the meaning of the study from the wide and general point of view".[4] Here, (a) and (b) appear in the Introduction, (c) and (d) in the Methods and Results respectively, and (e) and (f) in the Discussion or Conclusion.
The 'wine glass model' (see the pattern diagram in Fig. 1) helps explain how information is ordered in IMRaD writing (see pp. 2–3 of Glasman-deal[4]). As that textbook notes,[4] the model has two characteristics: a top-bottom symmetric shape, and a changing width, i.e. "the top is wide and it narrows towards the middle, and then widens again as it goes down toward the bottom".
The first characteristic, the top-bottom symmetric shape, represents the symmetry of the story's development. Note that the top trapezoid (representing the structure of the Introduction) and the trapezoid at the bottom are reversed: the subjects introduced in the Introduction are taken up again, in reverse order and in a form suited to the Discussion/Conclusion, in those closing sections. (Compare (a) and (b) above with (e) and (f).)
The second characteristic, the changing width of the diagram in Fig. 1, represents the changing generality of the viewpoint: as the story develops, the diagram widens where the viewpoint is more general and narrows where it is more specialized and focused.
As the standard format of academic journals
The IMRAD format has been adopted by a steadily increasing number of academic journals since the first half of the 20th century. The IMRAD structure has come to dominate academic writing in the sciences, most notably in empirical biomedicine.[2][6][7] The structure of most public health journal articles reflects this trend. Although the IMRAD structure originates in the empirical sciences, it now also regularly appears in academic journals across a wide range of disciplines. Many scientific journals now not only prefer this structure but also use the IMRAD acronym as an instructional device in the instructions to their authors, recommending the use of the four terms as main headings. For example, it is explicitly recommended in the "Uniform Requirements for Manuscripts Submitted to Biomedical Journals" issued by the International Committee of Medical Journal Editors (previously called the Vancouver guidelines):
The text of observational and experimental articles is usually (but not necessarily) divided into the following sections: Introduction, Methods, Results, and Discussion. This so-called "IMRAD" structure is not an arbitrary publication format but rather a direct reflection of the process of scientific discovery. Long articles may need subheadings within some sections (especially Results and Discussion) to clarify their content. Other types of articles, such as case reports, reviews, and editorials, probably need to be formatted differently.[8]
The IMRAD structure is also recommended for empirical studies in the 6th edition of the publication manual of the American Psychological Association (APA style).[9] The APA publication manual is widely used by journals in the social, educational and behavioral sciences.[10]
Benefits
The IMRAD structure has proved successful because it facilitates literature review, allowing readers to navigate articles more quickly to locate material relevant to their purpose.[11] The neat order of IMRAD rarely corresponds to the actual sequence of events or ideas in the research presented; instead, the structure supports a reordering that eliminates unnecessary detail and presents the most relevant and significant information clearly and logically to the readership, summarizing the research process in an ideal, noise-free sequence.
Caveats
The idealised sequence of the IMRAD structure has on occasion been criticised for being too rigid and simplistic. In a radio talk in 1964 the Nobel laureate Peter Medawar criticised this text structure for not giving a realistic representation of the thought processes of the writing scientist: "… the scientific paper may be a fraud because it misrepresents the processes of thought that accompanied or gave rise to the work that is described in the paper".[12] Medawar's criticism was discussed at the XIXth General Assembly of the World Medical Association in 1965.[13][14] While respondents may argue that it is too much to ask from such a simple instructional device to carry the burden of representing the entire process of scientific discovery, Medawar's caveat expressed his belief that many students and faculty throughout academia treat the structure as a simple panacea. Medawar and others have given testimony both to the importance and to the limitations of the device.
Abstract considerations
In addition to the scientific article itself, a brief abstract is usually required for publication. The abstract should, however, be composed to function as an autonomous text, even if some authors and readers may think of it as an almost integral part of the article. The increasing importance of well-formed autonomous abstracts may well be a consequence of the increasing use of searchable digital abstract archives, where a well-formed abstract will dramatically increase the probability for an article to be found by its optimal readership.[15] Consequently, there is a strong recent trend toward developing formal requirements for abstracts, most often structured on the IMRAD pattern, and often with strict additional specifications of topical content items that should be considered for inclusion in the abstract.[16] Such abstracts are often referred to as structured abstracts.[17] The growing importance of abstracts in the era of computerized literature search and information overload has led some users to modify the IMRAD acronym to AIMRAD, in order to give due emphasis to the abstract.
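Because structured abstracts label their parts explicitly, conformance to a required label set can be checked mechanically. A minimal Python sketch follows; the label set and the function are hypothetical illustrations, not drawn from any journal's actual requirements:

```python
import re

# Hypothetical label set; journals define their own required headings,
# often following the IMRAD pattern described above.
REQUIRED_LABELS = ["Background", "Methods", "Results", "Conclusions"]

def is_structured(abstract: str) -> bool:
    """Return True if every required label appears as 'Label:' and in order."""
    last_end = -1
    for label in REQUIRED_LABELS:
        match = re.search(rf"\b{label}:", abstract)
        if match is None or match.start() < last_end:
            return False  # label missing, or out of order
        last_end = match.end()
    return True
```

A checker like this rejects both unlabeled narrative abstracts and abstracts whose labeled sections appear out of order.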
Heading style variations
Usually, the IMRAD article sections use the IMRAD words as headings. A few variations can occur, as follows:
- Many journals have a convention of omitting the "Introduction" heading, based on the idea that the reader who begins reading an article does not need to be told that the beginning of the text is the introduction. This print-era convention has been fading since the advent of the Web, where an explicit "Introduction" heading helps with navigation via document maps and collapsible/expandable TOC trees. (The same considerations apply to the presence or omission of an explicit "Abstract" heading.)
- In some journals, the "Methods" heading may vary, being "Methods and materials", "Materials and methods", or similar phrases. Some journals mandate that exactly the same wording for this heading be used for all articles without exception; other journals reasonably accept whatever each submitted manuscript contains, as long as it is one of these sensible variants.
- The "Discussion" section may subsume any "Summary", "Conclusion", or "Conclusions" section, in which case there may or may not be any explicit "Summary", "Conclusion", or "Conclusions" subheading; or the "Summary"/"Conclusion"/"Conclusions" section may be a separate section, using an explicit heading on the same heading hierarchy level as the "Discussion" heading. Which of these variants to use as the default is a matter of each journal's chosen style, as is the question of whether the default style must be forced onto every article or whether sensible inter-article flexibility will be allowed. The journals which use the "Conclusion" or "Conclusions" along with a statement about the "Aim" or "Objective" of the study in the "Introduction" is following the newly proposed acronym "IaMRDC" which stands for "Introduction with aim, Materials and Methods, Results, Discussion, and Conclusion."[18]
Other elements that are typical although not part of the acronym
- Disclosure statements (see main article at conflicts of interest in academic publishing)
- The reader's question that this element answers: "Why should I (the reader) trust or believe what you (the author) say? Are you just making money off of saying it?"
- Appear either in opening footnotes or a section of the article body
- Subtypes of disclosure:
- Disclosure of funding (grants to the project)
- Disclosure of conflict of interest (grants to individuals, jobs/salaries, stock or stock options)
- Clinical relevance statement
- The reader's question that this element answers: "Why should I (the reader) spend my time reading what you say? How is it relevant to my clinical practice? Basic research is nice, other people's cases are nice, but my time is triaged, so make your case for 'why bother'"
- Appear either as a display element (sidebar) or a section of the article body
- Format: short, a few sentences or bullet points
- Ethical compliance statement
- The reader's question that this element answers: "Why should I believe that your study methods were ethical?"
- "We complied with the Declaration of Helsinki."
- "We got our study design approved by our local institutional review board before proceeding."
- "We got our study design approved by our local ethics committee before proceeding."
- "We treated our animals in accordance with our local Institutional Animal Care and Use Committee."
- Diversity, equity, and inclusion statement[19]
- The reader's question that this element answers: "Why should I believe that your study methods consciously included people?" (for example, avoided inadvertently underrepresenting some people, whether participants or researchers, by race, ethnicity, sex, gender, or other factors)
- "We worked to ensure that people of color and transgender people were not underrepresented among the study population."
- "One or more of the authors of this paper self-identifies as living with a disability."
- "One or more of the authors of this paper self-identifies as transgender."
Additional standardization (reporting guidelines)
In the late 20th century and early 21st, the scientific community found that the communicative value of journal articles was still much less than it could be if best practices were developed, promoted, and enforced. Thus arose reporting guidelines (guidelines for how best to report information). The general theme has been to create templates and checklists whose message to the user is, "your article is not complete until you have done all of these things." In the 1970s, the ICMJE (International Committee of Medical Journal Editors) released the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (Uniform Requirements or URM). Other such standards, mostly developed from the 1990s through the 2010s, are listed below. The academic medicine community is working to raise compliance with good reporting standards, but there is still much to be done;[20] for example, a 2016 review of instructions for authors in 27 emergency medicine journals found insufficient mention of reporting standards,[21] and a 2018 study found that even when journals' instructions for authors mention reporting standards, there is a difference between a mention or badge and enforcement of the requirements that the mention or badge represents.[22]
The advent of a need for best practices in data sharing has expanded the scope of these efforts beyond merely the pages of the journal article itself. In fact, from the most rigorous versions of the evidence-based perspective, the distance to go is still quite formidable.[23] FORCE11 is an international coalition that has been developing standards for how to share research data sets properly and most effectively.
Most researchers cannot be familiar with all of the many reporting standards that now exist; it is enough to know which ones must be followed in one's own work, and where to look for details when needed. Several organizations provide help with checking one's own compliance with the latest standards:
- The EQUATOR Network
- The BioSharing collaboration (biosharing.org)
Several important webpages on this topic are:
- NLM's list at Research Reporting Guidelines and Initiatives: By Organization
- The EQUATOR Network's list at Reporting guidelines and journals: fact & fiction
- TRANSPOSE (Transparency in Scholarly Publishing for Open Scholarship Evolution), "a grassroots initiative to build a crowdsourced database of journal policies," allowing faster and easier lookup and comparison, and potentially spurring harmonization
Relatedly, SHERPA provides compliance-checking tools, and AllTrials provides a rallying point, for efforts to enforce openness and completeness of clinical trial reporting. These efforts stand against publication bias and against excessive corporate influence on scientific integrity.
| Short name | Longer name | Best link | Organization that fostered it | Goals/Notes |
|---|---|---|---|---|
| AMSTAR | (A Measurement Tool to Assess Systematic Reviews) | amstar.ca | AMSTAR team | Provides a tool to test the quality of systematic reviews |
| ARRIVE | (Animal Research: Reporting of In Vivo Experiments) | www.nc3rs.org.uk/arrive-guidelines | NC3Rs | Seeks to improve the reporting of research using animals (maximizing information published and minimizing unnecessary studies) |
| CARE | (Consensus-based Clinical Case Reporting Guideline Development) | www.equator-network.org/reporting-guidelines/care | CARE Group | Seeks completeness, transparency, and data analysis in case reports and data from the point of care |
| CHEERS | (Consolidated Health Economic Evaluation Reporting Standards) | www.ispor.org/Health-Economic-Evaluation-Publication-CHEERS-Guidelines.asp Archived 2017-04-18 at the Wayback Machine | ISPOR | Seeks value in health care |
| CONSORT | (Consolidated Standards of Reporting Trials) | www.consort-statement.org | CONSORT Group | Provides a minimum set of recommendations for reporting randomized trials |
| COREQ | (Consolidated Criteria for Reporting Qualitative Research) | www.equator-network.org/reporting-guidelines/coreq/ | University of Sydney | Seeks quality in reporting of qualitative research by providing a 32-item checklist for interviews and focus groups |
| EASE guidelines | (EASE Guidelines for Authors and Translators of Scientific Articles to be Published in English) | www.ease.org.uk/publications/author-guidelines-authors-and-translators/ Archived 2018-11-25 at the Wayback Machine | EASE | Seeks quality reporting of all scientific literature |
| Empirical Standards | ACM SIGSOFT Empirical Standards for Software Engineering Research | https://acmsigsoft.github.io/EmpiricalStandards/ | ACM SIGSOFT | Provides methodology-specific research and reporting guidelines, checklists and reviewing systems |
| ENTREQ | (Enhancing Transparency in Reporting the Synthesis of Qualitative Research) | www.equator-network.org/reporting-guidelines/entreq/ | Various universities | Provides a framework for reporting the synthesis of qualitative health research |
| FAIR | (findability, accessibility, interoperability, and reusability) | doi.org/10.1038/sdata.2016.18 | Various organizations | High-level goals, allowing for various ways to achieve them; specifies "what" is wanted and "why", allowing the "how" to be determined by the researcher |
| ICMJE | (Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals; formerly known as the Uniform Requirements for Manuscripts Submitted to Biomedical Journals) | www.icmje.org/recommendations | ICMJE | Seeks quality in medical journal articles |
| JARS | Journal Article Reporting Standards | www.apastyle.org/manual/related/JARS-MARS.pdf | American Psychological Association | Seeks quality in psychological research reporting; published in the appendix of the APA Publication Manual |
| MARS | Meta-Analysis Reporting Standards | www.apastyle.org/manual/related/JARS-MARS.pdf | American Psychological Association | Seeks quality in psychological research reporting; published in the appendix of the APA Publication Manual |
| MI | Minimum Information standards | biosharing.org | Various organizations | A family of standards for bioscience reporting, developed by the various relevant specialty organizations and collated by the BioSharing portal (biosharing.org) (formerly collated by the MIBBI portal [Minimum Information about a Biomedical or Biological Investigation]) |
| MOOSE | (Meta-analysis Of Observational Studies in Epidemiology) | jamanetwork.com/journals/jama/article-abstract/192614 | MOOSE group (various organizations) | Seeks quality in meta-analysis of observational studies in epidemiology |
| NOS | (Newcastle–Ottawa scale) | http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp | University of Newcastle, Australia and University of Ottawa | Assesses quality of nonrandomized studies included in a systematic review and/or meta-analysis |
| PRISMA | (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) | www.prisma-statement.org | PRISMA group | Seeks quality in systematic reviews and meta-analyses, especially in the medical literature, but applicable to most scientific literature; PRISMA supersedes QUOROM |
| REMARK | (Reporting Recommendations for Tumor Marker Prognostic Studies) | doi.org/10.1093/jnci/dji237 | NCI and EORTC | Seeks quality in reporting of tumor marker research |
| RR | (registered reports) | cos.io/rr | Center for Open Science | Applies the principles of preregistration with the aim to improve both the quality of science being done and the quality of its reporting in journals. Aims to improve the incentivization of scientists by removing perverse incentives that encourage publication bias and inappropriate/excessive forms of post hoc analysis; it involves two peer review steps: one before results reporting (to review methodology alone) and another after results reporting. |
| SAMPL | (Statistical Analyses and Methods in the Published Literature) | www.equator-network.org/wp-content/uploads/2013/03/SAMPL-Guidelines-3-13-13.pdf | Centre for Statistics in Medicine at Oxford University | Seeks quality in statistics in the biomedical literature |
| SPIRIT | (Standard Protocol Items: Recommendations for Interventional Trials) | www.spirit-statement.org | SPIRIT Group (various organizations) | Seeks quality in clinical trial protocols by defining an evidence-based set of items to address in every protocol |
| SQUIRE | (Standards for Quality Improvement Reporting Excellence) | www.squire-statement.org | SQUIRE team (various organizations) | Provides a framework for reporting new knowledge about how to improve healthcare; intended for reports that describe system level work to improve the health care quality, patient safety, and value in health care |
| SRQR | (Standards for Reporting Qualitative Research: A Synthesis of Recommendations) | doi.org/10.1097/ACM.0000000000000388 | Various medical schools | Provides standards for reporting qualitative research |
| STAR | Structured, Transparent, Accessible Reporting | www.cell.com/star-authors-guide | Cell Press | Improved reporting of methods to aid reproducibility and researcher workflow[24] |
| STARD | (Standards for the Reporting of Diagnostic Accuracy Studies) | www.stard-statement.org | STARD Group (various organizations) | Diagnostic accuracy |
| STROBE | (Strengthening the Reporting of Observational Studies in Epidemiology) | www.strobe-statement.org | STROBE Group (various organizations) | Seeks quality in reporting of observational studies in epidemiology |
| TOP | (Transparency and Openness Promotion) | cos.io/top/ | (Center for Open Science) | Codifies 8 modular standards, for each of which a journal's editorial policy can pledge to meet a certain level of stringency (Disclose, Require, or Verify) |
| TREND | (Transparent Reporting of Evaluations with Nonrandomized Designs) | www.cdc.gov/trendstatement | TREND Group (various organizations) | Seeks to improve the reporting standards of nonrandomized evaluations of behavioral and public health interventions |
| TRIPOD | (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) | doi.org/10.7326/M14-0697 | Centre for Statistics in Medicine (Oxford University) and Julius Center for Health Sciences and Primary Care (University Medical Center Utrecht) | Provides a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes |
| URM / ICMJE | (Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals; formerly known as the Uniform Requirements for Manuscripts Submitted to Biomedical Journals) | www.icmje.org/recommendations | ICMJE | Seeks quality in medical journal articles |
References
1. P. K. R. Nair and V. D. Nair (2014). Scientific Writing and Communication in Agriculture and Natural Resources. Springer. p. 13.
2. Sollaci LB, Pereira MG (July 2004). "The introduction, methods, results, and discussion (IMRAD) structure: a fifty-year survey". Journal of the Medical Library Association. 92 (3): 364–7. PMC 442179. PMID 15243643.
3. Mogull SA (2017). Scientific And Medical Communication: A Guide For Effective Practice. New York: Routledge. ISBN 978-1-138-84255-7. Archived from the original on 2023-06-23. Retrieved 2017-07-20.
4. Glasman-deal H (2009). Science research writing for non-native speakers of English. Imperial College Press. ISBN 978-1-84816-310-2.
5. Hall GM, ed. (December 2012). How to write a paper (5th ed.). Wiley-Blackwell, BMJ Books. ISBN 978-0-470-67220-4.
6. Day, RA (1989). "The Origins of the Scientific Paper: The IMRAD Format" (PDF). American Medical Writers Association Journal. 4 (2): 16–18. Archived from the original (PDF) on September 27, 2011. Retrieved 2011-06-17.
7. Szklo M (2006). "Quality of scientific articles". Revista de Saúde Pública. 40: 30–35. doi:10.1590/s0034-89102006000400005. PMID 16924300.
8. "Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication - IV.A.1.a. General Principles" (PDF). International Committee of Medical Journal Editors. Archived from the original (PDF) on July 6, 2010. Retrieved 2010-03-08.
9. American Psychological Association (2010). Publication Manual of the American Psychological Association (6th ed.). American Psychological Association. ISBN 978-1-4338-0562-2.
10. "The IMRAD Research Paper Format". Department of Translation Studies, University of Tampere. Archived from the original on 2008-10-25. Retrieved 2008-10-22.
11. Burrough-Boenisch, J (1999). "International Reading Strategies for IMRD Articles". Written Communication. 16 (3): 296–316. doi:10.1177/0741088399016003002. S2CID 145686459.
12. Medawar, P (1964). "Is the scientific paper fraudulent?". The Saturday Review (August 1): 42–43.
13. Brain, L (1965). "Structure of the scientific paper". Br Med J. 2 (5466): 868–869. doi:10.1136/bmj.2.5466.868. PMC 1846354. PMID 5827805.
14. "Report of Editors' Conference". BMJ. 2 (5466): 870–872. 9 October 1965. doi:10.1136/bmj.2.5466.870. PMC 1846363. PMID 20790709.
15. "Structured Abstract Initiative". Education Resources Information Center. Archived from the original on June 8, 2011. Retrieved 2011-06-17.
16. Ripple AM, Mork JG, Knecht LS, Humphreys BL (April 2011). "A retrospective cohort study of structured abstracts in MEDLINE, 1992–2006". Journal of the Medical Library Association. 99 (2): 160–3. doi:10.3163/1536-5050.99.2.009. PMC 3066587. PMID 21464855.
17. U.S. National Library of Medicine (2011-06-16). "Structured Abstracts".
18. Mondal, Himel; Mondal, Shaikat; Saha, Koushik (2019). "What to Write in Each Segment of an Original Article?". Indian Journal of Vascular and Endovascular Surgery. 6 (3): 221. doi:10.4103/ijves.ijves_38_19. ISSN 0972-0820.
19. Cell Press (2021). Cell Press inclusion and diversity statement FAQ. Cell Press. Retrieved 2021-01-27.
20. Couzin-Frankel, Jennifer (2018-09-19). "'Journalologists' use scientific methods to study academic publishing. Is their work improving science?". Science. doi:10.1126/science.aav4758. ISSN 0036-8075. S2CID 115360831.
21. Sims MT, Henning NM, Wayant CC, Vassar M (November 2016). "Do emergency medicine journals promote trial registration and adherence to reporting guidelines? A survey of "Instructions for Authors"". Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine. 24 (1): 137. doi:10.1186/s13049-016-0331-3. PMC 5121955. PMID 27881175.
22. Leung V, Rousseau-Blass F, Beauchamp G, Pang DS (2018-05-24). "ARRIVE has not ARRIVEd: Support for the ARRIVE (Animal Research: Reporting of in vivo Experiments) guidelines does not improve the reporting quality of papers in animal welfare, analgesia or anesthesia". PLOS ONE. 13 (5): e0197882. Bibcode:2018PLoSO..1397882L. doi:10.1371/journal.pone.0197882. PMC 5967836. PMID 29795636.
23. Jefferson T, Jørgensen L (April 2018). "Redefining the 'E' in EBM". BMJ Evidence-Based Medicine. 23 (2): 46–47. doi:10.1136/bmjebm-2018-110918. PMID 29595127.
24. Marcus E (August 2016). "A STAR Is Born". Cell. 166 (5): 1059–1060. doi:10.1016/j.cell.2016.08.021. PMID 27565332.
Definition and Origins
Core Components
IMRAD is an acronym for Introduction, Methods, Results, and Discussion, representing a standardized organizational framework for scientific manuscripts that facilitates the logical reporting of empirical research. This structure guides authors in presenting their work as a coherent narrative, progressing from the rationale and context of the study to its execution, outcomes, and implications. Widely adopted in fields such as the natural sciences, social sciences, and engineering, IMRAD ensures clarity and reproducibility in communicating research findings.[3][1][4]

The Introduction serves to establish the research problem and its significance by providing essential background and a concise review of pertinent literature, thereby situating the study within the existing body of knowledge. It identifies specific gaps, unanswered questions, or limitations in prior work that justify the current investigation, articulates the study's objectives or hypotheses, and explains the broader relevance of the research to advancing scientific understanding or addressing practical needs. This section typically concludes by outlining the study's scope, setting the stage for the subsequent components.[3][1][4]

The Methods section describes the study's design, materials, procedures, and analytical approaches in sufficient detail to allow replication by independent researchers. It encompasses elements such as the selection of participants or samples, experimental protocols, instrumentation, data collection techniques, and the statistical or computational methods used for analysis, often employing the past tense and passive voice for objectivity. By emphasizing transparency and precision, this component enables verification of the results and assessment of the study's validity.[3][1][4]

The Results section objectively reports the primary findings derived from the methods, using data presentations such as tables, figures, graphs, and statistical summaries to convey key trends, patterns, and outcomes without offering explanations or interpretations. It focuses on the most relevant evidence supporting the research objectives, including measures of central tendency, variability, and significance tests, while avoiding speculative commentary. This separation maintains a clear distinction between raw evidence and its analysis.[3][1][4]

The Discussion interprets the results by relating them back to the hypotheses or objectives stated in the Introduction, evaluating how they align with or diverge from existing literature to highlight contributions to the field. It critically examines the implications of the findings, acknowledges methodological limitations or potential biases, and proposes avenues for future investigations or applications. Through this synthesis, the Discussion integrates the study's evidence into the wider scientific context, underscoring its impact.[3][1][4]

Collectively, the IMRAD sections form a sequential flow: the Introduction builds foundational context, the Methods provide the evidentiary framework, the Results deliver unadorned data, and the Discussion offers analytical depth, creating a cohesive progression from problem identification to insightful resolution.[3][1]

Historical Development
The IMRAD format emerged from evolving conventions in scientific writing during the 19th century, when reports began to include dedicated descriptions of methods and an organizational pattern separating theory, experiments (including observations), and discussion to improve logical flow and reproducibility. This separation of empirical observations from interpretive analysis laid foundational elements for structured reporting, as seen in publications from learned societies like the Royal Society, whose Philosophical Transactions increasingly emphasized factual accounts of experiments distinct from speculative commentary. An early precursor appeared in Louis Pasteur's 1859 addition of a methods section to scientific articles, marking a shift toward systematic presentation that essentially birthed the core of IMRAD. By the early 20th century, recommendations for an ideal IMRAD structure surfaced in writing guides, such as those by Melish and Wilson in 1922 and Trelease and Yule in 1925, though adoption remained sporadic. The format gained traction in the 1940s within medical journals, including the British Medical Journal, JAMA, The Lancet, and the New England Journal of Medicine, where initial uses exceeded traditional narrative styles. In the 1950s, usage in these journals surpassed 10%, bolstered by prior integration in physics publications like Physical Review, which analyzed spectroscopic articles from 1893 to 1980 showing progressive structuring. A pivotal influence was the work of Robert A. Day, whose guidelines on structured abstracts in the 1950s and later editions of his 1979 book How to Write and Publish a Scientific Paper advocated for IMRAD to enhance readability in medical literature. The 1960s and 1970s saw significant expansion, particularly in biology and medicine, driven by the Council of Biology Editors (now Council of Science Editors). 
Their style manual, first published in 1960 and revised through subsequent editions like the 1978 fourth edition, explicitly promoted the IMRAD format as a standard for organizing research articles, leading to over 80% adoption in surveyed medical journals by the late 1970s. The Vancouver Group (later ICMJE) further propelled this in 1978 with uniform requirements for manuscripts, emphasizing structured reporting for international collaboration. By 1975, the New England Journal of Medicine had fully embraced IMRAD, followed by the British Medical Journal in 1980, and JAMA and The Lancet in 1985. From the 1980s onward, IMRAD spread into diverse fields including physics, social sciences, and engineering via high-impact journals such as Nature and Science, which standardized it for research articles to accommodate multidisciplinary audiences. Standards such as ANSI Z39.16 (issued in 1972 and revised in 1979) formalized it nationally, while later protocols such as CONSORT (introduced in 1996 but building on 1980s trends) reinforced its use in clinical and experimental reporting. Key drivers included the exponential growth in research output, necessitating modular formats for efficient peer review and global dissemination, as well as the internationalization of science requiring consistent structures across languages and disciplines.

Detailed Structure
Introduction
The Introduction section of an IMRAD-structured scientific paper serves to establish the research context by guiding readers from a broad overview of the field to the specific aims of the study, employing a funnel approach that narrows progressively. This structure begins with general background information on the topic's significance, transitions to a synthesis of existing knowledge from key studies, identifies a clear knowledge gap or unresolved issue, and culminates in the study's rationale, objectives, or hypotheses.[5][6] Such organization ensures logical progression, with the broad end providing relevance and the narrow end justifying the research's necessity, typically spanning 500-1000 words or about 10-15% of the manuscript's total length excluding abstract and references.[7][8]

Key elements in the Introduction include citations to 10-20 seminal or high-impact studies that frame the current understanding, a concise rationale explaining why the gap matters, and explicit statements of research questions, hypotheses, or objectives to delineate the study's scope. Authors should prioritize recent, pertinent references to avoid redundancy, focusing on conceptual synthesis rather than exhaustive listings, while outlining the study's novelty without delving into methods or results. For instance, the background might cite foundational works establishing the field's importance, followed by targeted references highlighting limitations in prior approaches.[5]

Writing strategies emphasize clarity and flow: keep background descriptions neutral and literature-centered (e.g., "Previous studies have shown...") to maintain objectivity, shift to active voice for objectives (e.g., "This study investigates...") to convey direct intent, and incorporate transitional phrases like "However," or "Building on this," to ensure seamless narrowing.
These techniques promote readability and engagement, aligning with modern guidelines favoring concise, audience-oriented prose.[9][10]

Common pitfalls in crafting the Introduction include overly broad or verbose openings that dilute focus, such as starting with tangential global issues instead of field-specific context, or making unsubstantiated claims without evidential support, which undermines credibility. For example, a verbose opening might read: "Climate change has affected ecosystems worldwide since the Industrial Revolution, leading to various environmental disruptions that scientists have long studied in multiple disciplines," whereas a concise version sharpens to: "Rising temperatures have accelerated coral bleaching in tropical reefs, with models predicting 90% loss by 2050 if unchecked." Excessive literature review, resembling a standalone summary rather than integrated context, also risks overlap with the Discussion section's deeper analysis. To mitigate these, authors should iteratively revise for precision, ensuring every sentence advances the funnel toward the study's aims.[11][12][5]

As an illustrative example from a hypothetical biology study on climate impacts, consider this opening paragraph: "Coral reefs, vital to marine biodiversity, face unprecedented threats from ocean warming, which induces bleaching events that disrupt symbiotic algae-host relationships. Recent surveys indicate that global reef coverage has declined by 14% since 2009, primarily due to recurrent heat stress episodes. While physiological mechanisms of bleaching are well-documented in controlled lab settings, field-based assessments of recovery resilience in diverse reef systems remain limited, particularly for mesophotic zones below 30 meters. This study addresses this gap by examining thermal tolerance thresholds in Hawaiian mesophotic corals, hypothesizing that depth gradients confer adaptive advantages against projected warming scenarios."

Methods
The Methods section in an IMRAD-structured scientific paper provides a detailed, chronological account of the research procedures, ensuring that other researchers can replicate the study exactly. This transparency is essential for verifying the validity of the findings and advancing scientific knowledge through reproducible experiments. Unlike the Introduction, which outlines the rationale and objectives, the Methods operationalizes these by specifying how the study was conducted, including design choices, materials, and protocols. Written in the past tense and often using passive voice to emphasize actions over actors, the section avoids any discussion of results or interpretations, reserving those for later sections.[13][14]

Key principles guide the content to prioritize replicability and rigor. Authors must include precise details such as exact dosages, equipment models, software versions, and environmental conditions, allowing a skilled peer to recreate the study without ambiguity. For instance, rather than stating "cells were cultured," one might specify "Human embryonic kidney 293 cells (ATCC CRL-1573) were maintained in Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum at 37°C and 5% CO₂." Justifications for methodological choices, such as why a particular statistical test was selected, enhance credibility, though these are kept factual and sourced where applicable. Ethical considerations are integral, with statements confirming institutional review board (IRB) approval, informed consent procedures, and compliance with standards like the Declaration of Helsinki. Variability in the study is addressed through descriptions of controls, randomization sequences, and blinding to minimize bias, ensuring the design's robustness.[15][16]

Participants/Subjects
This subsection outlines the selection and characteristics of study participants or subjects, providing criteria that define the target population and ensure generalizability. Eligibility requirements, including inclusion and exclusion criteria, are stated clearly to allow assessment of the sample's representativeness. For human subjects, demographic details such as age range, gender distribution, and recruitment methods (e.g., via advertisements or clinic referrals) are reported, often with the number screened and reasons for exclusions. In animal studies, strain, age, sex, and housing conditions are specified to account for biological variability. Sample size determination is a critical element, calculated a priori to achieve adequate statistical power. For studies estimating proportions, a common formula is n = z²p(1 − p)/e², where n is the sample size, z is the Z-score for the desired confidence level (e.g., 1.96 for 95%), p is the estimated population proportion, and e is the margin of error. This formula assumes an infinite population and is conservative when p = 0.5, maximizing the sample size needed. For example, to estimate a proportion with 95% confidence and a 5% margin of error assuming p = 0.5, n = 1.96² × 0.5 × 0.5 / 0.05² ≈ 385. Finite population corrections may adjust this further if the population size is known. Randomization and allocation methods follow, such as using computer-generated sequences to assign participants to groups, preventing selection bias. Blinding, where applicable (e.g., double-blind for treatments), is described, including who was blinded (participants, investigators, or analysts) and how similarity between interventions was maintained.[17][18]

Materials/Equipment
Here, all physical and digital resources used in the study are inventoried with precise specifications to facilitate replication. This includes reagents, instruments, and software, often listed chronologically as they appear in the procedure. For laboratory-based research, details encompass supplier information, lot numbers for biological materials, and calibration standards for equipment. In clinical contexts, medications are identified by generic name, dosage, administration route, and storage conditions. For instance, in a pharmacology study, one might report: "Aspirin (acetylsalicylic acid, Sigma-Aldrich, catalog no. A5376) was dissolved in phosphate-buffered saline to achieve a 500 mg/L stock solution, stored at 4°C, and administered orally at 100 mg/kg body weight." Software for data management or analysis is versioned explicitly, such as "ImageJ version 1.53 (National Institutes of Health) for image processing." These details prevent confounding from variations in quality or functionality, upholding the study's integrity. Ethical sourcing, such as animal-derived materials compliant with welfare guidelines, is indicated through referenced protocols.[14][13]

Procedures/Step-by-Step Protocol
The core of the Methods, this subsection narrates the experimental or observational protocol in sequential order, mimicking a recipe for reproducibility. It begins with an overview of the study design (e.g., randomized controlled trial, cohort study) and proceeds to detailed steps, incorporating timelines, durations, and any deviations from standard practices. Controls are explicitly described, such as sham procedures in intervention trials or negative controls in bench experiments, to isolate variables. Randomization and blinding protocols are embedded here, with mechanisms like sealed envelopes or third-party allocation to conceal group assignments. For multi-phase studies, each phase is delineated, including safety monitoring or interim checks. Flowcharts or diagrams often illustrate complex processes, such as participant flow in trials, showing enrollment, allocation, follow-up, and analysis stages. This visual aid enhances clarity without adding interpretive text. The protocol's fidelity—how adherence was monitored—is noted, ensuring the reported methods reflect actual execution.[18][19]

Data Collection
This part specifies how data were gathered, including instruments, timing, and locations, to demonstrate reliability and completeness. Questionnaires or surveys are described with validation references (e.g., "The SF-36 Health Survey, version 2.0"), administration modes (in-person, online), and response rates. In observational studies, protocols for recording variables like physiological measurements detail tools (e.g., calibrated sphygmomanometers) and standardization to reduce measurement error. For longitudinal data, intervals between assessments are stated, along with strategies for handling dropouts, such as intention-to-treat principles. In clinical settings, concomitant care or co-interventions are documented to contextualize influences on data quality. All collection methods prioritize objectivity, with training for data collectors if inter-rater variability is a concern. This ensures the raw data's traceability back to the methods.[16][18]

Statistical Analysis Methods
The final subsection outlines data processing and analytical techniques, providing enough detail for verification without delving into results. Data preparation steps, such as cleaning, normalization, or handling missing values (e.g., multiple imputation), are explained. Software and versions are cited, alongside specific tests: for example, "Two-tailed t-tests were performed using R version 4.2.1 (R Core Team) to compare group means, with significance at α = 0.05." Assumptions underlying tests (e.g., normality checked via Shapiro-Wilk) and adjustments for multiple comparisons (e.g., Bonferroni correction) are justified. Sample size considerations tie back to power calculations, ensuring the analysis aligns with the study's objectives. Subgroup analyses are pre-specified to avoid data dredging, and sensitivity analyses for robustness are noted. These methods enable independent computation of statistics, linking procedurally to the data collected. Ethical data handling, including anonymization, is affirmed here or in ethics statements.[13][19]

An illustrative example from clinical trial reporting, guided by CONSORT standards, demonstrates these elements. In a hypothetical randomized trial evaluating a new antihypertensive drug, the Methods might read: "Participants. Adults aged 40-65 years with uncomplicated essential hypertension (systolic blood pressure 140-179 mmHg) were recruited from three urban clinics in New York City between January 2020 and December 2022. Exclusion criteria included secondary hypertension, cardiovascular disease, or pregnancy. A sample size of 200 per group was calculated to detect a 10 mmHg difference in systolic pressure (Δ) with 90% power and 5% two-sided alpha (z_{1−β} = 1.28, z_{α/2} = 1.96), assuming a standard deviation (σ) of 30 mmHg, using the formula n = 2σ²(z_{α/2} + z_{1−β})²/Δ².[20] Institutional Review Board approval was obtained from Mount Sinai Hospital (protocol #2020-045), and all participants provided written informed consent.[18][17] Interventions.
Participants were randomized 1:1 to receive either the study drug (DrugX, 50 mg daily oral tablet, manufactured by PharmaCorp, lot #ABC123) or placebo using a computer-generated block randomization sequence (block size 4) via REDCap software version 12.0. Blinding was maintained for participants, clinicians, and outcome assessors through identical packaging. Procedures. Following baseline assessment, interventions were administered for 12 weeks, with clinic visits at weeks 4, 8, and 12. Blood pressure was measured in triplicate using an Omron HEM-7120 device after 5 minutes of rest. Adherence was monitored via pill counts and self-report (>80% threshold for continuation). A CONSORT flow diagram (Figure 1) depicts recruitment: 1,200 screened, 800 eligible, 400 randomized (200 per group), with 10% loss to follow-up due to non-compliance. Data Collection. Primary outcome (change in systolic blood pressure) and secondary outcomes (diastolic pressure, adverse events) were recorded electronically. Safety data were collected via standardized forms. Statistical Analysis. Intention-to-treat analysis was conducted using SAS version 9.4. Differences were assessed with mixed-effects models adjusting for baseline values and clinic site, with missing data imputed via last-observation-carried-forward."

This excerpt highlights the section's role in enabling replication while maintaining ethical and methodological transparency. The length of the Methods varies with study complexity, typically 800-1,500 words, balancing detail with conciseness to support scientific scrutiny.[18][19]

Results
Empirical research papers following the IMRAD structure, particularly original quantitative or qualitative studies presenting new data, should include a Results section to objectively report the findings from data collection and analysis. Review papers, theoretical papers, and literature reviews typically do not include a Results section as they do not present original empirical findings.[21] The Results section of an IMRAD-structured scientific paper presents the study's findings in an objective, factual manner, focusing solely on the data obtained from the methods without offering explanations, interpretations, or speculations. This section emphasizes clarity and precision, organizing the information to allow readers to grasp the outcomes independently before any analysis in subsequent parts. Typically written in the past tense, it prioritizes primary outcomes—such as main effects or key variables—before addressing secondary or exploratory results, ensuring a logical flow that mirrors the research questions or hypotheses.[3][22] Organization can follow a chronological sequence aligned with the methods described earlier or a thematic structure based on the significance of findings, often using subheadings to group related parameters for enhanced readability. For instance, results might first detail overall trends, supported by specific data points, before noting exceptions or additional observations. Tables, graphs, and figures are integral for conveying complex data efficiently, each labeled and numbered separately (e.g., Table 1, Figure 1), with captions positioned above tables and below figures to provide standalone context. 
The text references these visuals to highlight key patterns without duplicating their content, such as stating, "Table 1 indicates that the treatment group exhibited a mean score of 70.2 ± 12.3, compared to 52.1 ± 11.0 in the control group."[23][22]

Key principles govern reporting to maintain objectivity: avoid interpretive language like "surprisingly high" or "unexpectedly low," and focus exclusively on what the data show. Primary outcomes receive detailed attention first, followed by secondary ones, with all findings reported comprehensively regardless of direction or magnitude. Statistical results must include descriptive measures such as means and standard deviations (SDs), alongside inferential statistics like p-values and effect sizes to quantify significance and magnitude. Exact p-values are preferred (e.g., p = 0.023 rather than p < 0.05), reported to two or three decimal places as appropriate, and paired with effect sizes like Cohen's d for t-tests to provide context beyond mere significance. For a two-sample independent t-test, the test statistic is calculated as t = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂), where x̄₁ and x̄₂ are the sample means, s₁² and s₂² are the variances, and n₁ and n₂ are the sample sizes.[24][25][26]

Visual elements adhere to guidelines that promote accessibility and precision: figure legends must describe axes (e.g., independent variable on the x-axis, dependent on the y-axis), scale units, error bars (typically representing SD or standard error of the mean), and any statistical annotations, ensuring the graphic stands alone without needing the text for comprehension. Redundancy is minimized by using visuals for raw or detailed data presentation while reserving the narrative for synthesizing trends, such as "Figure 2 illustrates a significant increase in response rates across time points, with error bars denoting ±1 SD."
Tables should summarize aggregated data, avoiding raw listings, and include footnotes for p-values or other details to avoid cluttering the main body.[22][23]

To illustrate, consider a hypothetical experiment evaluating the impact of an intervention on cognitive scores using pre- and post-measurements analyzed via paired t-tests to assess within-group changes from pre- to post-test. The following table presents representative results (assuming n=30 per group):

| Group | Pre-Test Mean (SD) | Post-Test Mean (SD) | t(29) | p-value | Cohen's d (effect size) |
|---|---|---|---|---|---|
| Control | 50.5 (9.8) | 51.2 (10.1) | 0.35 | 0.732 | 0.06 |
| Treatment | 49.8 (10.2) | 68.4 (11.5) | 5.07 | <0.001 | 0.92 |
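The two-sample t statistic described earlier can also be computed directly from summary statistics like those in the table. The following Python sketch (function names are illustrative, not from any cited source) applies the two-sample formula to the hypothetical post-test means of the two groups — a between-group comparison, distinct from the within-group paired tests reported in the table — and adds a pooled-SD Cohen's d:

```python
import math

def two_sample_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic from summary data:
    t = (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2)."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                       / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Hypothetical post-test summary values from the table (n = 30 per group)
t = two_sample_t(68.4, 11.5, 30, 51.2, 10.1, 30)  # ≈ 6.16
d = cohens_d(68.4, 11.5, 30, 51.2, 10.1, 30)      # ≈ 1.59
```

Note that the paired t-tests in the table operate on pre/post difference scores, so their effect sizes are not directly comparable to this between-group d.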

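As a closing numerical check, the a-priori sample-size formula for proportions discussed under Participants/Subjects, n = z²p(1 − p)/e², can be verified in a few lines of Python (a minimal sketch; the function name is illustrative):

```python
import math

def proportion_sample_size(z, p, e):
    """Sample size to estimate a proportion: n = z^2 * p * (1 - p) / e^2,
    rounded up to a whole participant (infinite-population assumption)."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 95% confidence (z = 1.96), conservative p = 0.5, 5% margin of error
n = proportion_sample_size(1.96, 0.5, 0.05)  # 385
```

Using the conservative p = 0.5 maximizes p(1 − p), so any other assumed proportion would yield a smaller required sample.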