Research
Research is creative and systematic work undertaken to increase the stock of knowledge.[1] It involves the collection, organization, and analysis of evidence to increase understanding of a topic, and it is characterized by a particular attentiveness to accounting for and controlling sources of bias and error. A research project may be an expansion of past work in the field. To test the validity of instruments, procedures, or experiments, research may replicate elements of prior projects or the project as a whole.
The primary purposes of basic research (as opposed to applied research) are documentation, discovery, interpretation, and the research and development (R&D) of methods and systems for the advancement of human knowledge. Approaches to research depend on epistemologies, which vary considerably both within and between humanities and sciences. There are several forms of research: scientific, humanities, artistic, economic, social, business, marketing, practitioner research, life, technological, etc. The scientific study of research practices is known as meta-research.
A researcher is a person who conducts research, especially in order to discover new information or to reach a new understanding.[2] To be a social researcher or social scientist, one should have extensive knowledge of the social-science subjects in which one specializes. Similarly, a natural-science researcher should have knowledge of the relevant natural-science fields (physics, chemistry, biology, astronomy, zoology, and so on). Professional associations provide one pathway to mature in the research profession.[3]
Etymology
The word research is derived from the Middle French recherche, meaning "to go about seeking"; that term in turn derives from the Old French recerchier, a compound of re- and cerchier (or sercher), meaning 'to search'.[5] The earliest recorded use of the term was in 1577.[5]
Definitions
Research has been defined in a number of different ways, and while there are similarities, there does not appear to be a single, all-encompassing definition that is embraced by all who engage in it.
Research, in its simplest terms, is searching for knowledge and searching for truth. In a formal sense, it is a systematic study of a problem pursued with a deliberately chosen strategy: it begins with choosing an approach and preparing a blueprint (design), proceeds through formulating research hypotheses, choosing methods and techniques, selecting or developing data-collection tools, processing the data, and interpreting it, and ends with presenting solution(s) to the problem.[6]
Another definition of research is given by John W. Creswell, who states that "research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue". It consists of three steps: pose a question, collect data to answer the question, and present an answer to the question.[7][page needed]
The Merriam-Webster Online Dictionary defines research more generally to also include studying already existing knowledge: "studious inquiry or examination; especially: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws".[5]
Forms of research
Original research
Original research, also called primary research, is research that is not exclusively based on a summary, review, or synthesis of earlier publications on the subject of research. This material is of a primary-source character. The purpose of original research is to produce new knowledge rather than to present existing knowledge in a new form (e.g., summarized or classified).[8][9] Original research can take various forms, depending on the discipline it pertains to. In experimental work, it typically involves direct or indirect observation of the researched subject(s), e.g., in the laboratory or in the field; it documents the methodology, results, and conclusions of an experiment or set of experiments, or offers a novel interpretation of previous results. In analytical work, there are typically some new (for example) mathematical results produced, or a new way of approaching an existing problem. In some subjects which do not typically carry out experimentation or analysis of this kind, the originality lies in the particular way existing understanding is changed or re-interpreted based on the outcome of the work of the researcher.[10]
The degree of originality of the research is among the major criteria for articles to be published in academic journals and usually established by means of peer review.[11] Graduate students are commonly required to perform original research as part of a dissertation.[12]
Scientific research
Scientific research is a systematic way of gathering data and harnessing curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world, and it makes practical applications possible. Scientific research may be funded by public authorities, charitable organizations, and private organizations, and it can be subdivided by discipline.
Generally, research is understood to follow a certain structural process. Though the order may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied:
- Observations and formation of the topic: Consists of choosing a subject area of interest and pursuing that subject in order to conduct subject-related research. The subject area should not be chosen randomly, since the researcher must read a vast amount of literature on the topic to determine the gap in the literature that the research intends to narrow. A keen interest in the chosen subject area is advisable. The research will have to be justified by linking its importance to already existing knowledge about the topic.
- Hypothesis: A testable prediction which designates the relationship between two or more variables.
- Conceptual definition: Description of a concept by relating it to other concepts.
- Operational definition: Details regarding how the variables will be defined and measured/assessed in the study.
- Gathering of data: Consists of identifying a population and selecting samples, then gathering information from or about these samples using specific research instruments. The instruments used for data collection must be valid and reliable.
- Analysis of data: Involves breaking down the individual pieces of data to draw conclusions from them.
- Interpretation of data: The results can be represented through tables, figures, and pictures, and then described in words.
- Testing and revising the hypothesis
- Conclusion, with reiteration if necessary
A common misconception is that a hypothesis will be proven (see, rather, null hypothesis). Generally, a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, then the hypothesis is rejected (see falsifiability). However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true.
A useful hypothesis allows prediction, and within the accuracy of observation at the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction. In this case, a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. Researchers can also use a null hypothesis, which states that there is no relationship or difference between the independent and dependent variables.
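The asymmetry between rejecting and supporting a hypothesis can be made concrete with a small statistical sketch. The Python fragment below is a minimal illustration, not drawn from any study: the group measurements and the conventional 0.05 significance level are assumptions chosen for demonstration. It tests a null hypothesis of no difference between two groups.

```python
# Minimal sketch of null-hypothesis testing (illustrative data, alpha = 0.05).
from scipy import stats

# Hypothetical measurements from a control and a treatment group.
control = [4.8, 5.1, 5.0, 4.9, 5.2, 4.7, 5.0, 4.9]
treatment = [5.3, 5.6, 5.4, 5.2, 5.7, 5.5, 5.4, 5.6]

# Null hypothesis: the two groups share the same mean.
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value < 0.05:
    # The data are inconsistent with the null hypothesis.
    print(f"p = {p_value:.4f}: reject the null; the data support the hypothesis.")
else:
    # Failing to reject is not the same as proving the null true.
    print(f"p = {p_value:.4f}: fail to reject the null.")
```

Even when the null is rejected, the outcome only supports the working hypothesis; alternative explanations consistent with the same data remain possible.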
Research in the humanities
Research in the humanities involves different methods, such as hermeneutics and semiotics. Humanities scholars usually do not search for the ultimate correct answer to a question, but instead explore the issues and details that surround it. Context is always important, and context can be social, historical, political, cultural, or ethnic. An example of research in the humanities is historical research, which is embodied in historical method. Historians use primary sources and other evidence to systematically investigate a topic, and then write histories in the form of accounts of the past. Other studies aim merely to examine the occurrence of behaviours in societies and communities, without particularly looking for reasons or motivations to explain these. These studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory.[13]
Artistic research
Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is a debated body of thought which offers an alternative to purely scientific methods in the search for knowledge and truth.
The controversial trend of artistic teaching becoming more academically oriented is leading to artistic research being accepted as the primary mode of enquiry in art, as in the case of other disciplines.[14] One of the characteristics of artistic research is that it must accept subjectivity, as opposed to the classical scientific method. As such, it is similar to the social sciences in using qualitative research and intersubjectivity as tools to apply measurement and critical analysis.[15]
Artistic research has been defined by the School of Dance and Circus (Dans och Cirkushögskolan, DOCH), Stockholm in the following manner – "Artistic research is to investigate and test with the purpose of gaining knowledge within and for our artistic disciplines. It is based on artistic practices, methods, and criticality. Through presented documentation, the insights gained shall be placed in a context."[16] Artistic research aims to enhance knowledge and understanding with presentation of the arts.[17] A simpler understanding by Julian Klein defines artistic research as any kind of research employing the artistic mode of perception.[18] For a survey of the central problematics of today's artistic research, see Giaco Schiesser.[19]
According to artist Hakan Topal, in artistic research, "perhaps more so than other disciplines, intuition is utilized as a method to identify a wide range of new and unexpected productive modalities".[20] Most writers, whether of fiction or non-fiction books, also have to do research to support their creative work. This may be factual, historical, or background research. Background research could include, for example, geographical or procedural research.[21]
The Society for Artistic Research (SAR) publishes the triannual Journal for Artistic Research (JAR),[22][23] an international, online, open-access, and peer-reviewed journal for the identification, publication, and dissemination of artistic research and its methodologies from all arts disciplines. It also runs the Research Catalogue (RC),[24][25][26] a searchable, documentary database of artistic research, to which anyone can contribute.
Patricia Leavy addresses eight arts-based research (ABR) genres: narrative inquiry, fiction-based research, poetry, music, dance, theatre, film, and visual art.[27]
In 2016, the European League of Institutes of the Arts launched the 'Florence Principles' on the Doctorate in the Arts.[28] The Florence Principles, which relate to the Salzburg Principles and the Salzburg Recommendations of the European University Association, name seven points of attention that distinguish the Doctorate/PhD in the Arts from a scientific doctorate/PhD. The Florence Principles have been endorsed and are supported by AEC, CILECT, CUMULUS, and SAR.
Historical research
The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history. There are various history guidelines that are commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis. This includes lower criticism and sensual criticism. Though items may vary depending on the subject matter and researcher, the following concepts are part of most formal historical research:[29]
- Identification of origin date
- Evidence of localization
- Recognition of authorship
- Analysis of data
- Identification of integrity
- Attribution of credibility
Documentary research
Steps in conducting research

Research is often conducted using the hourglass model structure of research.[30] The hourglass model starts with a broad spectrum for research, focuses in on the required information through the method of the project (like the neck of the hourglass), and then expands the research in the form of discussion and results. The major steps in conducting research are:[31]
- Identification of research problem
- Literature review
- Specifying the purpose of research
- Determining specific research questions
- Specification of a conceptual framework, sometimes including a set of hypotheses[32]
- Choice of a methodology (for data collection)
- Data collection
- Verifying data
- Analyzing and interpreting the data
- Reporting and evaluating research
- Communicating the research findings and, possibly, recommendations
The steps generally represent the overall process; however, they should be viewed as an ever-changing iterative process rather than a fixed set of steps.[33] Most research begins with a general statement of the problem, or rather, the purpose for engaging in the study.[34] The literature review identifies flaws or holes in previous research, which provides justification for the study. Often, a literature review is conducted in a given subject area before a research question is identified. A gap in the current literature, as identified by a researcher, then engenders a research question. The research question may be parallel to the hypothesis. The hypothesis is the supposition to be tested. The researcher(s) collects data to test the hypothesis. The researcher(s) then analyzes and interprets the data via a variety of statistical methods, engaging in what is known as empirical research. The results of the data analysis in rejecting or failing to reject the null hypothesis are then reported and evaluated. At the end, the researcher may discuss avenues for further research.

However, some researchers advocate for the reverse approach: starting with articulating findings and discussion of them, moving "up" to identification of a research problem that emerges in the findings and literature review. The reverse approach is justified by the transactional nature of the research endeavor, where research inquiry, research questions, research method, relevant research literature, and so on are not fully known until the findings have fully emerged and been interpreted.
Rudolph Rummel says, "... no researcher should accept any one or two tests as definitive. It is only when a range of tests are consistent over many kinds of data, researchers, and methods can one have confidence in the results."[35]
Plato in Meno talks about an inherent difficulty, if not a paradox, of doing research that can be paraphrased in the following way, "If you know what you're searching for, why do you search for it?! [i.e., you have already found it] If you don't know what you're searching for, what are you searching for?!"[36]
Research methods

The goal of the research process is to produce new knowledge or deepen understanding of a topic or issue. This process takes three main forms (although, as previously discussed, the boundaries between them may be obscure):
- Exploratory research, which helps to identify and define a problem or question.
- Constructive research, which tests theories and proposes solutions to a problem or question.
- Empirical research, which tests the feasibility of a solution using empirical evidence.
There are two major types of empirical research design: qualitative research and quantitative research. Researchers choose qualitative or quantitative methods according to the nature of the research topic they want to investigate and the research questions they aim to answer:
Qualitative research
Qualitative research is more subjective and non-quantitative; it uses different methods of collecting, analyzing, and interpreting data in terms of the meanings, definitions, characteristics, symbols, and metaphors of things. Qualitative research is further classified into types such as the following. Ethnography (from ethno, 'people', and grapho, 'to write') focuses mainly on the culture of a group of people, including shared attributes, language, practices, structure, values, norms, and material things, and evaluates human lifestyles; it may cover ethnic groups, ethnogenesis, composition, resettlement, and social-welfare characteristics. Phenomenology is a powerful strategy for demonstrating methodology to health-professions education and is well suited for exploring challenging problems in health-professions education.[38] In addition, PMP researcher Mandy Sha argued that a project management approach is necessary to control the scope, schedule, and cost related to qualitative research design, participant recruitment, data collection, reporting, as well as stakeholder engagement.[39][40]
Quantitative research
Quantitative research involves systematic empirical investigation of quantitative properties and phenomena and their relationships, by asking a narrow question and collecting numerical data to analyze using statistical methods. The quantitative research designs are experimental, correlational, and survey (or descriptive).[7] Statistics derived from quantitative research can be used to establish the existence of associative or causal relationships between variables. Quantitative research is linked with the philosophical and theoretical stance of positivism.
The quantitative data collection methods rely on random sampling and structured data collection instruments that fit diverse experiences into predetermined response categories. These methods produce results that can be summarized, compared, and generalized to larger populations if the data are collected using proper sampling and data collection strategies.[41] Quantitative research is concerned with testing hypotheses derived from theory or being able to estimate the size of a phenomenon of interest.[41]
If the research question is about people, participants may be randomly assigned to different treatments (this is the only way that a quantitative study can be considered a true experiment). If this is not feasible, the researcher may collect data on participant and situational characteristics to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants.[42]
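As a rough sketch of the two randomization steps just described, probability sampling first draws participants from the population, and random assignment then allocates them to conditions. The participant list, sample size, and group sizes below are hypothetical, and only Python's standard library is used.

```python
# Sketch: probability sampling from a population, then random assignment
# of the sampled participants to treatment and control (hypothetical data).
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = [f"person_{i}" for i in range(10_000)]

# Probability sampling: every member has a known, equal chance of selection.
sample = random.sample(population, k=100)

# Random assignment: shuffle the sample, then split it into two conditions.
random.shuffle(sample)
treatment_group = sample[:50]
control_group = sample[50:]

print(len(treatment_group), len(control_group))  # -> 50 50
```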
In either qualitative or quantitative research, the researcher(s) may collect primary or secondary data.[41] Primary data is data collected specifically for the research, such as through interviews or questionnaires. Secondary data is data that already exists, such as census data, which can be re-used for the research. It is good ethical research practice to use secondary data wherever possible.[43]
Mixed-method research, i.e. research that includes qualitative and quantitative elements, using both primary and secondary data, is becoming more common.[44] This method has benefits that using one method alone cannot offer. For example, a researcher may choose to conduct a qualitative study and follow it up with a quantitative study to gain additional insights.[45]
Big data has had a major impact on research methods: many researchers now spend less effort on primary data collection, and methods for analyzing the huge amounts of readily available data have also been developed.
Non-empirical research
Non-empirical (theoretical) research is an approach that involves the development of theory, as opposed to using observation and experimentation. As such, non-empirical research seeks solutions to problems using existing knowledge as its source. This, however, does not mean that new ideas and innovations cannot be found within the pool of existing and established knowledge. Non-empirical research is not an absolute alternative to empirical research, because the two may be used together to strengthen a research approach. Neither one is less effective than the other, since each has its particular purpose in science. Typically, empirical research produces observations that need to be explained; theoretical research then tries to explain them, and in so doing generates empirically testable hypotheses; these hypotheses are then tested empirically, giving more observations that may need further explanation; and so on. See Scientific method.
A simple example of a non-empirical task is the prototyping of a new drug using a differentiated application of existing knowledge; another is the development of a business process in the form of a flow chart and texts where all the ingredients are from established knowledge. Much of cosmological research is theoretical in nature. Mathematics research does not rely on externally available data; rather, it seeks to prove theorems about mathematical objects.
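A classic illustration of such non-empirical work is a purely deductive proof, which needs no external data at all. Below is Euclid's argument for the infinitude of primes, set as a short LaTeX sketch (assuming an amsthm-style theorem environment):

```latex
\begin{theorem}[Euclid]
There are infinitely many prime numbers.
\end{theorem}
\begin{proof}
Suppose the primes were finite: $p_1, p_2, \dots, p_k$.
Let $N = p_1 p_2 \cdots p_k + 1$. No $p_i$ divides $N$, since each
leaves remainder $1$. But $N > 1$ has some prime divisor, which
therefore lies outside the list, a contradiction.
\end{proof}
```

The conclusion follows from the axioms of arithmetic alone; no observation could strengthen or weaken it.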
Research ethics
Research ethics is a discipline within the study of applied ethics. Its scope ranges from general scientific integrity and misconduct to the treatment of human and animal subjects. The social responsibilities of scientists and researchers are not traditionally included and are less well defined.
The discipline is most developed in medical research. Beyond the issues of falsification, fabrication, and plagiarism that arise in every scientific field, research design in human subject research and animal testing are the areas that raise ethical questions most often.
The list of historic cases includes many large-scale violations and crimes against humanity such as Nazi human experimentation and the Tuskegee syphilis experiment which led to international codes of research ethics. No approach has been universally accepted, but typically cited codes are the 1947 Nuremberg Code, the 1964 Declaration of Helsinki, and the 1978 Belmont Report.
Today, research ethics committees, such as those of the US, UK, and EU, govern and oversee the responsible conduct of research. One major goal is to reduce questionable research practices.
Research in other fields, such as the social sciences, information technology, biotechnology, or engineering, may also generate ethical concerns.
Problems in research
[edit]Metascience
Metascience is the study of research through the use of research methods. Also known as "research on research", it aims to reduce waste and increase the quality of research in all fields. Meta-research concerns itself with the detection of bias, methodological flaws, and other errors and inefficiencies. Among the findings of meta-research are low rates of reproducibility across a large number of fields.[46]
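One driver of low reproducibility can be made concrete with a simulation: when original studies are underpowered and only statistically significant results get published, exact replications succeed at roughly the power of the test rather than at the nominal confidence level. The sketch below is a minimal Python illustration; the effect size, sample size, and trial count are assumptions chosen for demonstration, not estimates from the metascience literature.

```python
# Sketch: why underpowered studies replicate poorly (assumed parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect, n, alpha, trials = 0.3, 20, 0.05, 5_000  # small effect, small samples

def significant() -> bool:
    """Run one two-group study and report whether p < alpha."""
    a = rng.normal(effect, 1.0, n)  # treatment group
    b = rng.normal(0.0, 1.0, n)    # control group
    return stats.ttest_ind(a, b).pvalue < alpha

# Condition on the original study being significant (publication filter),
# then check whether an identical, independent replication succeeds too.
published = 0
replicated = 0
for _ in range(trials):
    if significant():
        published += 1
        replicated += significant()

print(f"replication rate among published findings: {replicated / published:.2f}")
```

With these illustrative parameters the printed rate lands far below one half, mirroring the pattern meta-research reports.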
Replication crisis
Academic bias
Funding bias
Publication bias
Non-western methods
In many disciplines, Western methods of conducting research are predominant.[50] Researchers are overwhelmingly taught Western methods of data collection and study. The increasing participation of indigenous peoples as researchers has brought increased attention to the scientific lacuna in culturally sensitive methods of data collection.[51] Western methods of data collection may not be the most accurate or relevant for research on non-Western societies. For example, "Hua Oranga" was created as a criterion for psychological evaluation in Māori populations, and is based on dimensions of mental health important to the Māori people – "taha wairua (the spiritual dimension), taha hinengaro (the mental dimension), taha tinana (the physical dimension), and taha whanau (the family dimension)".[52]
Even though Western dominance seems to be prominent in research, some scholars, such as Simon Marginson, argue for "the need [for] a plural university world".[53] Marginson argues that the East Asian Confucian model could take over the Western model.
This could be due to changes in funding for research both in the East and the West. Focused on emphasizing educational achievement, East Asian cultures, mainly in China and South Korea, have encouraged the increase of funding for research expansion.[53] In contrast, in the Western academic world, notably in the United Kingdom as well as in some state governments in the United States, funding cuts for university research have occurred, which some say may lead to the future decline of Western dominance in research.
Language
Research is often biased in the languages that are preferred (linguicism) and the geographic locations where research occurs. Periphery scholars face the challenges of exclusion and linguicism in research and academic publication. As the great majority of mainstream academic journals are written in English, multilingual periphery scholars often must translate their work to be accepted to elite Western-dominated journals.[54] Multilingual scholars' influences from their native communicative styles can be assumed to be incompetence instead of difference.[55] Patterns of geographic bias also show a relationship with linguicism: countries whose official languages are French or Arabic are far less likely to be the focus of single-country studies than countries with different official languages. Within Africa, English-speaking countries are more represented than other countries.[56]
Generalizability
Generalization is the process of more broadly applying the valid results of one study.[57] Studies with a narrow scope can result in a lack of generalizability, meaning that the results may not be applicable to other populations or regions. In comparative politics, this can result from using a single-country study, rather than a study design that uses data from multiple countries. Despite the issue of generalizability, single-country studies have risen in prevalence since the late 2000s.[56]
For comparative politics, Western countries are over-represented in single-country studies, with heavy emphasis on Western Europe, Canada, Australia, and New Zealand. Since 2000, Latin American countries have become more popular in single-country studies. In contrast, countries in Oceania and the Caribbean are the focus of very few studies.[56]
Publication peer review
Peer review is a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Usually, the peer review process involves experts in the same field who are consulted by editors to give a review of the scholarly works produced by a colleague of theirs from an unbiased and impartial point of view, and this is usually done free of charge. The tradition of peer review being done for free has, however, brought many pitfalls, which also help explain why most peer reviewers decline many invitations to review.[58] It has been observed that publications from periphery countries rarely rise to the same elite status as those of North America and Europe.[55]
Open research
The open research, open science, and open access movements assume that all information generally deemed useful should be free and belongs to a "public domain", that of "humanity".[59] This idea gained prevalence as a result of Western colonial history and ignores alternative conceptions of knowledge circulation. For instance, most indigenous communities consider that access to certain information proper to the group should be determined by relationships.[59] There is alleged to be a double standard in the Western knowledge system: on the one hand, "digital rights management" used to restrict access to personal information on social networking platforms is celebrated as a protection of privacy, while simultaneously, when similar functions are used by cultural groups (i.e., indigenous communities), this is denounced as "access control" and condemned as censorship.[59]
Professionalisation
In several national and private academic systems, the professionalisation of research has resulted in formal job titles.
In Russia
In present-day Russia, and some other countries of the former Soviet Union, the term researcher (Russian: Научный сотрудник, nauchny sotrudnik) has been used both as a generic term for a person who has been carrying out scientific research, and as a job position within the frameworks of the Academy of Sciences, universities, and in other research-oriented establishments.
The following ranks are known:
- Junior Researcher (Junior Research Associate)
- Researcher (Research Associate)
- Senior Researcher (Senior Research Associate)
- Leading Researcher (Leading Research Associate)[60]
- Chief Researcher (Chief Research Associate)
Publishing
Academic publishing is a system that is necessary for academic scholars to peer-review work and make it available to a wider audience. The system varies widely by field and is always changing, if often slowly. Most academic work is published in journal article or book form. There is also a large body of research that exists in either a thesis or dissertation form; these forms of research can be found in databases explicitly for theses and dissertations. In publishing, STM publishing is an abbreviation for academic publications in science, technology, and medicine. Most established academic fields have their own scientific journals and other outlets for publication, though many academic journals are somewhat interdisciplinary and publish work from several distinct fields or subfields. The kinds of publications that are accepted as contributions of knowledge or research vary greatly between fields, from the print to the electronic format.

A study suggests that researchers should not give great consideration to findings that are not replicated frequently.[61] It has also been suggested that all published studies should be subjected to some measure for assessing the validity or reliability of their procedures to prevent the publication of unproven findings.[62]

Business models are different in the electronic environment. Since about the early 1990s, licensing of electronic resources, particularly journals, has been very common. Presently, a major trend, particularly with respect to scholarly journals, is open access.[63] There are two main forms of open access: open access publishing, in which the articles or the whole journal is freely available from the time of publication, and self-archiving, where the author makes a copy of their own work freely available on the web.
Research statistics and funding
Most funding for scientific research comes from three major sources: corporate research and development departments; private foundations; and government research councils such as the National Institutes of Health in the US[64] and the Medical Research Council in the UK. These are managed primarily through universities and in some cases through military contractors. Many senior researchers (such as group leaders) spend a significant amount of their time applying for grants for research funds. These grants are necessary not only for researchers to carry out their research but also as a source of merit. The Social Psychology Network provides a comprehensive list of U.S. Government and private foundation funding sources.
The total number of researchers (full-time equivalents) per million inhabitants for individual countries is shown in the following table.
Research expenditure by type of research as a share of GDP for individual countries is shown in the following table.
[Table: research expenditure as a share of GDP (%), by type of research (basic, applied, development), for individual countries, 2018[66] – country labels not preserved]
See also
- Advertising research
- European Charter for Researchers
- Internet research
- Laboratory
- List of words ending in ology
- Market research
- Marketing research
- Operations research
- Participatory action research
- Psychological research methods
- Research integrity
- Research-intensive cluster
- Research organization
- Research proposal
- Research university
- Scholarly research
- Secondary research
- Social research
- Society for Artistic Research
- Timeline of the history of the scientific method
- Undergraduate research
References
- ^ OECD (2015). Frascati Manual. The Measurement of Scientific, Technological and Innovation Activities. doi:10.1787/9789264239012-en. hdl:20.500.12749/13290. ISBN 978-9264238800. Archived from the original on 5 June 2020. Retrieved 4 April 2020.
- ^ "Researcher". Cambridge Dictionary. Retrieved 13 October 2024.
- ^ Sha, Mandy (14 May 2019). "Professional Association and Pathways to Leadership in Our Profession". Survey Practice. 12 (1): 1–6. doi:10.29115/SP-2018-0039.
- ^ "The Origins of Science Archived 3 March 2003 at the Wayback Machine". Scientific American Frontiers.
- ^ a b c "Research". Merriam-Webster. Archived from the original on 18 October 2018. Retrieved 20 May 2018.
- ^ Grover, Vijey (2015). "Research Approach: An Overview". Golden Research Thoughts. 4.
- ^ a b Creswell 2008.
- ^ "What is Original Research? Original research is considered a primary source". Thomas G. Carpenter Library, University of North Florida. Archived from the original on 9 July 2011. Retrieved 9 August 2014.
- ^ Rozakis, Laurie (2007). Schaum's Quick Guide to Writing Great Research Papers. McGraw Hill Professional. ISBN 978-0071511223 – via Google Books.
- ^ Singh, Michael; Li, Bingyi (6 October 2009). "Early career researcher originality: Engaging Richard Florida's international competition for creative workers" (PDF). Centre for Educational Research, University of Western Sydney. p. 2. Archived (PDF) from the original on 10 April 2011. Retrieved 12 January 2012.
- ^ Callaham, Michael; Wears, Robert; Weber, Ellen L. (2002). "Journal Prestige, Publication Bias, and Other Characteristics Associated With Citation of Published Studies in Peer-Reviewed Journals". JAMA. 287 (21): 2847–50. doi:10.1001/jama.287.21.2847. PMID 12038930.
- ^ US Department of Labor (2006). Occupational Outlook Handbook, 2006–2007 edition. Mcgraw-hill. ISBN 978-0071472883 – via Google Books.
- ^ Roffee, James A; Waling, Andrea (18 August 2016). "Resolving ethical challenges when researching with minority and vulnerable populations: LGBTIQ victims of violence, harassment and bullying". Research Ethics. 13 (1): 4–22. doi:10.1177/1747016116658693.
- ^ Lesage, Dieter (Spring 2009). "Who's Afraid of Artistic Research? On measuring artistic research output" (PDF). Art & Research. 2 (2). ISSN 1752-6388. Archived (PDF) from the original on 5 October 2011. Retrieved 14 August 2011.
- ^ Eisner, E. W. (1981). "On the Differences between Scientific and Artistic Approaches to Qualitative Research". Educational Researcher. 10 (4): 5–9. doi:10.3102/0013189X010004005. JSTOR 1175121.
- ^ Unattributed. "Artistic research at DOCH". Dans och Cirkushögskolan (website). Archived from the original on 5 November 2011. Retrieved 14 August 2011.
- ^ Schwab, M. (2009). "Draft Proposal". Journal for Artistic Research. Bern University of the Arts.
- ^ Julian Klein (2010). "What is artistic research?". Archived from the original on 13 May 2021. Retrieved 15 June 2021.
- ^ Schiesser, G. (2015). What is at stake – Qu'est ce que l'enjeu? Paradoxes – Problematics – Perspectives in Artistic Research Today, in: Arts, Research, Innovation and Society. Eds. Gerald Bast, Elias G. Carayannis [= ARIS, Vol. 1]. Wien/New York: Springer. pp. 197–210.
- ^ Topal, H. (2014). "Whose Terms? A Glossary for Social Practice: Research". newmuseum.org. Archived from the original on 9 September 2014.
- ^ Hoffman, A. (2003). Research for Writers, pp. 4–5. London: A&C Black Publishers Limited.
- ^ "Swiss Science and Technology Research Council (2011), Research Funding in the Arts" (PDF).[permanent dead link]
- ^ Henk Borgdorff (2012), The Conflict of the Faculties. Perspectives on Artistic Research and Academia (Chapter 11: The Case of the Journal for Artistic Research), Leiden: Leiden University Press.
- ^ Schwab, Michael, and Borgdorff, Henk, eds. (2014), The Exposition of Artistic Research: Publishing Art in Academia, Leiden: Leiden University Press.
- ^ Wilson, Nick and van Ruiten, Schelte / ELIA, eds. (2013), SHARE Handbook for Artistic Research Education, Amsterdam: Valand Academy, p. 249.
- ^ Hughes, Rolf: "Leap into Another Kind: International Developments in Artistic Research", in Swedish Research Council, ed. (2013), Artistic Research Then and Now: 2004–2013, Yearbook of AR&D 2013, Stockholm: Swedish Research Council.
- ^ Leavy, Patricia (2015). Methods Meets Art (2nd ed.). New York: Guilford. ISBN 978-1462519446.
- ^ Rahmat, Omarkhil. "Florence principles, 2016" (PDF). Archived from the original (PDF) on 21 December 2016. Retrieved 23 December 2016.
- ^ Garraghan, Gilbert J. (1946). A Guide to Historical Method. New York: Fordham University Press. p. 168. ISBN 978-0-8371-7132-6.
- ^ Trochim, W.M.K. (2006). Research Methods Knowledge Base.
- ^ Creswell 2008, pp. 8–9.
- ^ Shields, Patricia M.; Rangarjan, N. (2013). A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. Stillwater, OK: New Forums Press. ISBN 9781581072471.
- ^ Gauch, Jr., H.G. (2003). Scientific Method in Practice. Cambridge, UK: Cambridge University Press. ISBN 0-521-81689-0. p. 3.
- ^ Rocco, T.S.; Hatcher, T.; Creswell, J.W. (2011). The Handbook of Scholarly Writing and Publishing. San Francisco, CA: John Wiley & Sons. ISBN 978-0-470-39335-2.
- ^ "QUESTIONS ABOUT FREEDOM, DEMOCIDE, AND WAR". www.hawaii.edu. Archived from the original on 4 January 2012. Retrieved 25 November 2011.
- ^ Plato, & Bluck, R. S. (1962). Meno. Cambridge, UK: University Press.
- ^ Sullivan P (13 April 2005). "Maurice R. Hilleman dies; created vaccines". The Washington Post. Archived from the original on 20 October 2012. Retrieved 10 September 2017.
- ^ Pawar, Neelam (December 2020). "6. Type of Research and Type Research Design". Research Methodology: An Overview. Vol. 15. KD Publications. pp. 46–57. ISBN 978-81-948755-8-1.
- ^ Sha, Mandy; Childs, Jennifer Hunter (1 August 2014). "Applying a project management approach to survey research projects that use qualitative methods". Survey Practice. 7 (4): 1–8. doi:10.29115/SP-2014-0021. Archived from the original on 25 November 2023. Retrieved 3 December 2023.
- ^ Sha, Mandy; Pan, Yuling (1 December 2013). "Adapting and Improving Methods to Manage Cognitive Pretesting of Multilingual Survey Instruments". Survey Practice. 6 (4): 1–8. doi:10.29115/SP-2013-0024.
- ^ a b c Eyler, Amy A. (2020). Research Methods for Public Health. New York: Springer Publishing Company. ISBN 978-0-8261-8206-7. OCLC 1202451096.
- ^ "Data Collection Methods". uwec.edu. Archived from the original on 20 October 2011. Retrieved 26 October 2011.
- ^ Kara 2012, p. 102.
- ^ Kara 2012, p. 114.
- ^ Creswell, John W. (2014). Research design : qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks: Sage. ISBN 978-1-4522-2609-5. Archived from the original on 16 November 2023. Retrieved 11 December 2018.
- ^ Ioannidis, John P. A.; Fanelli, Daniele; Dunne, Debbie Drake; Goodman, Steven N. (2 October 2015). "Meta-research: Evaluation and Improvement of Research Methods and Practices". PLOS Biology. 13 (10): e1002264. doi:10.1371/journal.pbio.1002264. ISSN 1545-7885. PMC 4592065. PMID 26431313.
- ^ John S (8 December 2017). Scientific Method. New York, NY: Routledge. doi:10.4324/9781315100708. ISBN 978-1-315-10070-8. S2CID 201781341.
- ^ Krimsky Sheldon (2013). "Do Financial Conflicts of Interest Bias Research? An Inquiry into the "Funding Effect" Hypothesis" (PDF). Science, Technology, & Human Values. 38 (4): 566–587. doi:10.1177/0162243912456271. S2CID 42598982. Archived from the original (PDF) on 17 October 2012. Retrieved 15 September 2015.
- ^ Song, F.; Parekh, S.; Hooper, L.; Loke, Y. K.; Ryder, J.; Sutton, A. J.; Hing, C.; Kwok, C. S.; Pang, C.; Harvey, I. (2010). "Dissemination and publication of research findings: An updated review of related biases". Health Technology Assessment. 14 (8): iii, iix–xi, iix–193. doi:10.3310/hta14080. PMID 20181324.
- ^ Reverby, Susan M. (1 April 2012). "Zachary M. Schrag. Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009. Baltimore: Johns Hopkins University Press. 2010. Pp. xii, 245. $45.00". The American Historical Review. 117 (2): 484–485. doi:10.1086/ahr.117.2.484-a. ISSN 0002-8762.
- ^ Smith, Linda Tuhiwai (2012). Decolonizing Methodologies: Research and Indigenous Peoples (2nd ed.). London: Zed Books. ISBN 978-1848139503. Archived from the original on 25 October 2018. Retrieved 24 October 2018.
- ^ Stewart, Lisa (2012). "Commentary on Cultural Diversity Across the Pacific: The Dominance of Western Theories, Models, Research and Practice in Psychology". Journal of Pacific Rim Psychology. 6 (1): 27–31. doi:10.1017/prp.2012.1.
- ^ a b "Sun sets on Western dominance as East Asian Confucian model takes lead". 24 February 2011. Archived from the original on 20 September 2016. Retrieved 29 August 2016.
- ^ Canagarajah, A. Suresh (1 January 1996). "From Critical Research Practice to Critical Research Reporting". TESOL Quarterly. 30 (2): 321–331. doi:10.2307/3588146. JSTOR 3588146.
- ^ a b Canagarajah, Suresh (October 1996). "'Nondiscursive' Requirements in Academic Publishing, Material Resources of Periphery Scholars, and the Politics of Knowledge Production". Written Communication. 13 (4): 435–472. doi:10.1177/0741088396013004001. S2CID 145250687.
- ^ a b c Pepinsky, Thomas B. (2019). "The Return of the Single-Country Study". Annual Review of Political Science. 22: 187–203. doi:10.1146/annurev-polisci-051017-113314.
- ^ Kukull, W. A.; Ganguli, M. (2012). "Generalizability: The trees, the forest, and the low-hanging fruit". Neurology. 78 (23): 1886–1891. doi:10.1212/WNL.0b013e318258f812. PMC 3369519. PMID 22665145.
- ^ "Peer Review of Scholarly Journal". www.PeerViewer.com. June 2017. Archived from the original on 30 July 2017. Retrieved 29 July 2017.
- ^ a b c Christen, Kimberly (2012). "Does Information Really Want to be Free? Indigenous Knowledge Systems and the Question of Openness". International Journal of Communication. 6. Archived from the original on 15 July 2017. Retrieved 7 June 2017.
- ^ "Ведущий научный сотрудник: должностные обязанности". www.aup.ru. Archived from the original on 1 April 2010. Retrieved 22 January 2014.
- ^ Heiner Evanschitzky, Carsten Baumgarth, Raymond Hubbard and J. Scott Armstrong (2006). "Replication Research in Marketing Revisited: A Note on a Disturbing Trend" (PDF). Archived from the original (PDF) on 20 June 2010. Retrieved 10 January 2012.
- ^ J. Scott Armstrong & Peer Soelberg (1968). "On the Interpretation of Factor Analysis" (PDF). Psychological Bulletin. 70 (5): 361–364. doi:10.1037/h0026434. S2CID 25687243. Archived from the original (PDF) on 21 June 2010. Retrieved 11 January 2012.
- ^ J. Scott Armstrong & Robert Fildes (2006). "Monetary Incentives in Mail Surveys" (PDF). International Journal of Forecasting. 22 (3): 433–441. doi:10.1016/j.ijforecast.2006.04.007. S2CID 154398140. Archived from the original (PDF) on 20 June 2010. Retrieved 11 January 2012.
- ^ "Home | RePORT". report.nih.gov. Archived from the original on 21 May 2021. Retrieved 22 May 2021.
- ^ Research input and output worldwide, various years since 2014, Statistical Annex, by country, Table C2: Total researchers and researchers per million inhabitants, 2015 and 2018
- ^ Research input and output worldwide, various years since 2014, Statistical Annex, by country, Table B1: Research expenditure as a share of GDP and in purchasing power parity dollars (PPP$), 2015–2018, year 2018
Sources
- Creswell, John W. (2008). Educational Research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson. ISBN 978-0-13-613550-0.
- Kara, Helen (2012). Research and Evaluation for Busy Practitioners: A Time-Saving Guide. Bristol: The Policy Press. ISBN 978-1-44730-115-8.
Further reading
- Groh, Arnold (2018). Research Methods in Indigenous Contexts. New York: Springer. ISBN 978-3-319-72774-5.
- Cohen, N.; Arieli, T. (2011). "Field research in conflict environments: Methodological challenges and snowball sampling". Journal of Peace Research. 48 (4): 423–436. doi:10.1177/0022343311405698. S2CID 145328311.
- Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. 2014. Handbook of Research Methods in Military Studies New York: Routledge.
- Talja, Sanna and Pamela J. Mckenzie (2007). Editor's Introduction: Special Issue on Discursive Approaches to Information Seeking in Context, The University of Chicago Press.
Definitions and Etymology
Etymology
The English word "research" entered usage in the mid-16th century, around the 1570s, initially denoting a "close search or inquiry" conducted with thoroughness.[15] It derives directly from the Middle French noun recherche, meaning "a searching" or "to go about seeking," which itself stems from the Old French verb recerchier or recercer, implying an intensive or repeated investigation.[15][16] This Old French term breaks down to the intensive prefix re- (indicating repetition or intensity, akin to "again" or "back") combined with cerchier, meaning "to search" or "to seek," ultimately tracing to the Latin circare, "to go around" or "to wander about in a circle," evoking a sense of circling back for deeper examination.[15][16] By the 17th century, the term had solidified in English to encompass systematic inquiry, reflecting its connotation of deliberate, iterative pursuit rather than casual looking.[15]Core Definitions
Research is defined as a systematic investigation, including research development, testing, and evaluation, that is designed to develop or contribute to generalizable knowledge.[17][18] This definition, originating from U.S. federal regulations such as the Common Rule (45 CFR 46), emphasizes a structured, methodical approach rather than ad hoc exploration, distinguishing research from casual inquiry by requiring a predetermined plan for data collection, analysis, and interpretation to yield findings applicable beyond the immediate context.[19][20] In academic and scientific contexts, research entails the rigorous collection of empirical data or logical analysis to test hypotheses, validate theories, or uncover causal relationships, often involving replicable methods to minimize bias and ensure reliability.[21][22] Unlike mere inquiry, which may involve open-ended questioning for personal understanding, research demands formal protocols, such as peer review and statistical validation, to produce verifiable results that advance collective knowledge.[23][24]

Key elements include systematicity, referring to a predefined methodology (e.g., experimental design or archival review) applied consistently; investigation, encompassing observation, experimentation, or theoretical modeling; and generalizability, where outcomes must hold potential for broader application, excluding purely internal or operational activities like routine quality assessments.[25][26] This framework ensures research prioritizes causal realism—identifying true mechanisms over correlative assumptions—while empirical grounding prevents unsubstantiated claims, as seen in fields from physics to social sciences where falsifiability remains a cornerstone criterion.[7]

Philosophical Foundations
Epistemology, the philosophical study of knowledge, its nature, sources, and limits, underpins research by addressing how investigators justify claims as true.[27] Research paradigms derive from epistemological stances, such as positivism, which posits that knowledge arises from observable, verifiable phenomena through empirical methods, contrasting with interpretivism, which emphasizes subjective meanings derived from human experience.[28] Ontology complements this by examining the nature of reality—whether objective and independent (realism) or socially constructed (relativism)—influencing whether research prioritizes causal mechanisms or interpretive contexts.[29]

Ancient foundations trace to Aristotle (384–322 BCE), who integrated empirical observation with logical deduction in works like Physics and Nicomachean Ethics, laying groundwork for systematic inquiry into natural causes.[30] The Scientific Revolution advanced this through empiricism, championed by Francis Bacon (1561–1626), who in Novum Organum (1620) promoted inductive methods to derive general laws from particular observations, critiquing deductive scholasticism for impeding discovery.[31] Rationalism, articulated by René Descartes (1596–1650) in Meditations on First Philosophy (1641), stressed innate ideas and deductive reasoning from self-evident truths, exemplified by his method of doubt to establish certainty.[32]

Modern philosophy of science synthesizes these traditions, with Karl Popper (1902–1994) introducing falsifiability in The Logic of Scientific Discovery (1934) as the demarcation criterion for scientific theories, emphasizing empirical refutation over mere confirmation to advance causal understanding.[30] This falsificationist approach counters inductivism's problem of infinite confirmation, prioritizing rigorous testing against reality. While academia often favors paradigms like Kuhn's paradigm shifts (1962), which highlight social influences on theory change, empirical evidence supports realism's focus on mind-independent structures, as untestable constructs risk pseudoscientific claims.[33] Institutional biases in peer review may undervalue dissenting causal models, yet truth-seeking demands scrutiny of such influences to preserve methodological integrity.[34]

Forms and Classifications of Research
Original versus Derivative Research
Original research, also known as primary research, entails the direct collection and analysis of new data to address specific questions or test hypotheses, often through methods such as controlled experiments, surveys, or fieldwork.[35][36] This form of inquiry generates firsthand evidence, enabling researchers to draw conclusions grounded in empirical observations rather than preexisting datasets. For instance, a clinical trial measuring the efficacy of a novel drug in human subjects qualifies as original research, as it produces unpublished data on outcomes like recovery rates or side effects.[37] In academic publishing, original research appears in peer-reviewed journals as primary literature, where authors detail their methodology, results, and interpretations to contribute novel knowledge to the field.[37]

Derivative research, synonymous with secondary research, involves the synthesis, interpretation, or reanalysis of data and findings already produced by others, without generating new primary data.[35][38] Common examples include literature reviews that compile and critique existing studies, meta-analyses that statistically aggregate results from multiple original investigations, or theoretical works that reinterpret historical data.[38] This approach relies on the quality and completeness of prior sources, which can introduce cumulative errors or overlooked biases if the foundational data is flawed or selectively reported.[39] While derivative efforts consolidate knowledge and identify patterns across studies—such as in systematic reviews assessing treatment effectiveness—they do not advance the empirical frontier independently.[38]

The distinction between original and derivative research underscores differing contributions to knowledge accumulation: original work establishes causal links through direct evidence, whereas derivative work evaluates, contextualizes, or applies those links.[35][40] In practice, much published scholarship blends elements of both, but funding and prestige often favor original endeavors due to their potential for groundbreaking discoveries, though derivative analyses remain essential for validation and policy formulation.[41]

| Aspect | Original Research | Derivative Research |
|---|---|---|
| Data Source | Newly collected (e.g., experiments, surveys) | Existing data from prior studies |
| Primary Goal | Generate novel evidence and insights | Synthesize, analyze, or reinterpret data |
| Examples | Field observations, lab trials | Meta-analyses, literature reviews |
| Strengths | Direct causality testing, reduced bias from synthesis | Identifies trends, cost-effective |
| Limitations | Resource-intensive, higher risk of error in novel methods | Dependent on source quality, potential propagation of flaws |
Scientific Research
Scientific research is the systematic investigation of natural phenomena through observation, experimentation, and analysis to generate new knowledge.[42] It involves the planned collection, interpretation, and evaluation of empirical data to contribute to scientific understanding.[43] Unlike derivative or non-empirical forms, scientific research prioritizes testable hypotheses and falsifiable predictions, as emphasized by philosopher Karl Popper's criterion that demarcates science from pseudoscience by requiring theories to be capable of being proven wrong through evidence.[44][45]

Key characteristics of scientific research include empiricism, relying on observable and measurable evidence; objectivity, minimizing researcher bias through standardized methods; replicability, allowing independent verification of results; and systematicity, following structured procedures rather than ad hoc approaches.[46] These traits ensure that findings are provisional and subject to revision based on new data, fostering cumulative progress in knowledge.[47]

The process adheres to the scientific method, typically comprising steps such as: making observations to identify a phenomenon; formulating a testable hypothesis; designing and conducting experiments to gather data; analyzing results statistically; and drawing conclusions while iterating if necessary.[48] This iterative cycle, often visualized as hypothesis testing followed by refinement or rejection, underpins advancements in fields like physics, chemistry, and biology.[49]

Reproducibility is foundational, yet challenges persist, as evidenced by the replication crisis where many published results fail independent verification. For instance, a 2015 effort to replicate 100 psychology studies succeeded in only 36% of cases with statistically significant effects matching originals.[50] Surveys indicate nearly three-quarters of biomedical researchers acknowledge a reproducibility crisis, attributed partly to "publish or perish" incentives favoring novel over robust findings.[51] Such issues underscore the need for rigorous statistical practices and preregistration to mitigate biases in data interpretation and publication.[52]
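The sampling uncertainty around such a replication estimate can itself be quantified. Below is a minimal Python sketch using the normal-approximation confidence interval; the 36-of-100 figures restate the 2015 psychology replication result cited above, and the interval formula is the standard Wald approximation rather than anything reported in that study.

```python
# Sketch: 95% normal-approximation confidence interval for a replication rate.
import math

successes, n = 36, 100          # replications succeeded / studies attempted
p_hat = successes / n           # observed replication rate
z = 1.96                        # 95% two-sided normal quantile
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"replication rate: {p_hat:.2f} "
      f"(95% CI {p_hat - half_width:.2f} to {p_hat + half_width:.2f})")
# -> roughly 0.27 to 0.45
```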
Non-empirical research derives conclusions through deductive reasoning, logical analysis, and theoretical frameworks without collecting or analyzing observational data. This contrasts with empirical research, which relies on measurable phenomena observed in the natural world to test hypotheses and generate knowledge. Non-empirical methods emphasize a priori knowledge—truths independent of experience—and are foundational in disciplines where logical consistency supersedes sensory evidence.[53][54][55]
In mathematics, non-empirical research predominates through the construction and proof of theorems from established axioms using formal logic, yielding results verifiable solely by deduction rather than experiment. For example, the proof of Fermat's Last Theorem by Andrew Wiles in 1994 demonstrated that no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer n greater than 2, a result achieved via the modularity of elliptic curves rather than empirical testing. Such proofs establish universal truths applicable across contexts, independent of physical reality.[56][57]
Philosophical inquiry represents another core form, involving conceptual analysis, argumentation, and thought experiments to explore metaphysical, ethical, and epistemological questions. Thinkers like René Descartes employed methodological doubt in the 17th century to arrive at foundational certainties, such as "cogito ergo sum," through introspective reasoning rather than external observation. Contemporary non-empirical ethics research, for instance, uses argument-based methods to evaluate moral frameworks in technology, prioritizing logical coherence over data from human behavior.[57][58]
Theoretical research in foundational sciences, such as certain aspects of logic or set theory, also falls under non-empirical forms, where models are refined deductively to uncover structural possibilities. While these methods provide robust, timeless insights—evident in mathematics' role underpinning physics—they face criticism for potential detachment from reality, as untested theories risk irrelevance without eventual empirical linkage, though pure domains like logic require no such validation.[59][60]
Applied versus Basic Research
Basic research, also known as fundamental or pure research, seeks to expand the boundaries of human knowledge by investigating underlying principles and phenomena without a predetermined practical goal.[7] It prioritizes theoretical understanding, often through hypothesis testing and exploratory experiments, such as probing the properties of subatomic particles or genetic mechanisms.[61] In contrast, applied research directs efforts toward solving specific, real-world problems by building on existing knowledge to develop technologies, products, or processes, exemplified by engineering improvements in battery efficiency based on electrochemical principles.[7][62]
The modern distinction between these categories gained prominence in the mid-20th century, particularly through Vannevar Bush's 1945 report Science, the Endless Frontier, which positioned basic research as the "pacemaker of technological progress" essential for long-term innovation, while applied research translates discoveries into immediate utility.[63] Bush advocated for federal investment in basic research via institutions like the proposed National Science Foundation, arguing it fosters serendipitous breakthroughs that applied efforts alone cannot achieve.[64] This framework influenced U.S. science policy, embedding the dichotomy in funding mechanisms where basic research receives substantial public support—40% of U.S. basic research funding came from the federal government in 2022, compared to 37% from businesses—while applied research draws more from industry.[65] Earlier conceptual roots trace to 18th-century separations of "pure" science from utilitarian pursuits, but Bush's linear model—basic preceding applied—formalized it amid the post-World War II expansion of government-sponsored science.[61]
Methodologically, basic research emphasizes open-ended inquiry, replication, and peer-reviewed publication in journals, often yielding foundational theories like quantum mechanics, which underpin later applications in electronics.[66] Applied research, however, integrates interdisciplinary teams, prototyping, and iterative testing oriented toward measurable outcomes, such as clinical trials for drug development following basic pharmacological studies.[67] Empirical analyses of citation networks reveal that basic research generates broader, longer-term impacts, with high-citation basic papers influencing diverse fields over decades, whereas applied outputs cluster in narrower, short-term applications.[68][66]
Yet the boundary is porous: feedback loops exist, as applied challenges refine basic theories, challenging the strict sequentiality of Bush's model.[69] Critics contend the distinction is subjective and policy-driven, potentially distorting resource allocation by undervaluing hybrid efforts where immediate applicability motivates fundamental inquiry.[70] For instance, National Institutes of Health data show that grants labeled "basic" often yield patentable insights, blurring lines and suggesting the categories serve administrative purposes more than causal realities of discovery.[71] Nonetheless, econometric studies affirm complementarity: investments in basic research enhance applied productivity by 20-30% in sectors like biotechnology, as foundational knowledge reduces uncertainty in downstream development.[68] This interdependence underscores that while applied research delivers tangible societal benefits—such as vaccines derived from virology basics—sustained progress requires prioritizing basic inquiry to avoid depleting the knowledge reservoir upon which applications depend.[72]
The Process of Conducting Research
Key Steps in Research
The research process entails a systematic approach to inquiry, often iterative rather than strictly linear, to generate reliable knowledge from empirical evidence or logical deduction. Core steps, as delineated in scientific methodology, begin with identifying a clear research question grounded in observable phenomena or gaps in existing knowledge. This initial formulation ensures focus and testability, preventing vague pursuits that yield inconclusive results.[73][74]
Subsequent steps involve conducting a thorough literature review to contextualize the question against prior findings, avoiding duplication and refining hypotheses based on established data. A hypothesis or testable prediction is then formulated, specifying expected causal relationships or outcomes. For empirical research, this leads to designing a methodology that controls variables, selects appropriate samples, and outlines data collection procedures to minimize bias.[75][48]
Data collection follows, employing tools such as experiments, surveys, or observations calibrated for precision and replicability; for instance, in controlled experiments, randomization and blinding techniques are applied to isolate causal effects. Analysis then applies statistical or qualitative methods to interpret the data, assessing significance through metrics like p-values or effect sizes while accounting for potential confounders. Conclusions are drawn only if supported by the evidence, with limitations explicitly stated to facilitate future scrutiny.[73][76]
Finally, results are disseminated via peer-reviewed publications or reports, enabling verification and building cumulative knowledge; this step underscores the self-correcting nature of research, where discrepancies prompt reevaluation of prior steps. Deviations from these steps, such as inadequate controls, have historically contributed to erroneous claims later retracted.[74][77]
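As a minimal illustration of the analysis step just described, the sketch below compares simulated treatment and control measurements and reports a p-value alongside an effect size (Cohen's d); all values are invented stand-ins rather than data from any actual study.

```python
# Sketch of the analysis step: a two-sample t-test plus an effect size,
# since statistical significance alone says nothing about magnitude.
# The "measurements" are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(5.4, 1.2, 40)  # hypothetical outcome scores
control = rng.normal(5.0, 1.2, 40)

t, p = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {cohens_d:.2f}")
```
Research Methodologies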
Research methodologies comprise the planned strategies for data collection, analysis, and interpretation used to address research questions systematically.[3] They are broadly classified into quantitative, qualitative, and mixed methods, each suited to different investigative needs based on the nature of the data and objectives.[78]
Quantitative methodologies emphasize numerical data and statistical analysis to measure variables, test hypotheses, and establish patterns or causal links with a focus on objectivity and generalizability.[79] Common techniques include experiments, surveys with closed-ended questions, and large-scale sampling, where researchers manipulate independent variables—such as in randomized controlled trials assigning participants randomly to treatment or control groups—to isolate effects while controlling confounders.[80][81] These approaches yield replicable results from sizable datasets, enabling precise predictions and broad inferences, though they risk oversimplifying complex human behaviors by prioritizing measurable outcomes over contextual depth.[82][83]
Qualitative methodologies prioritize descriptive, non-numerical data to explore meanings, processes, and subjective experiences, employing methods like in-depth interviews, ethnographic observations, and thematic content analysis.[84] Case studies exemplify this by conducting intensive, multifaceted examinations of a single bounded case—such as an organization or event—to uncover intricate dynamics in real-world settings.[85][86] While offering rich, nuanced insights into "how" and "why" phenomena occur, qualitative methods are susceptible to interpretive bias, smaller sample limitations, and challenges in achieving statistical generalizability.[82][87]
Mixed methods research integrates quantitative and qualitative elements within a single study to capitalize on their respective strengths, such as quantifying trends via surveys and elucidating mechanisms through follow-up interviews, thereby providing a more holistic validation of findings.[88] This convergence approach, as outlined in frameworks like sequential explanatory designs, mitigates individual method weaknesses but demands rigorous integration to avoid methodological conflicts.[89]
Other specialized methodologies include correlational designs, which assess variable associations without manipulation to identify potential relationships for further testing, and longitudinal studies tracking changes over time to infer developmental or causal trajectories.[90] Method selection hinges on research goals: quantitative designs favor empirical precision for hypothesis-driven inquiries, qualitative designs enable exploratory depth, and mixed methods suit multifaceted problems requiring both breadth and nuance.[91] Empirical rigor in application, including random sampling and validity checks, is essential to counter inherent limitations like selection bias or confounding variables across all types.[92]
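The random assignment at the heart of a randomized controlled trial can be sketched in a few lines; the participant IDs and the 50/50 split below are hypothetical choices for illustration.

```python
# Sketch: randomly assigning participants to trial arms so that, on
# average, known and unknown confounders balance across groups.
import numpy as np

rng = np.random.default_rng(42)
participants = np.arange(1, 101)        # 100 hypothetical participant IDs
shuffled = rng.permutation(participants)

treatment_group, control_group = shuffled[:50], shuffled[50:]
print(f"treatment n = {treatment_group.size}, control n = {control_group.size}")
```
Tools and Technologies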
Laboratory instruments form the backbone of empirical research in fields such as biology, chemistry, and materials science, enabling precise measurement and observation of physical phenomena. Common tools include microscopes for visualizing cellular structures, centrifuges for separating substances by density, and spectrophotometers for analyzing light absorption to determine concentrations.[93][94] Additional essential equipment encompasses pH meters for acidity measurements, autoclaves for sterilization, and chromatography systems for separating mixtures based on molecular properties.[95] These instruments rely on principles of physics and chemistry to generate reproducible data, though their accuracy depends on calibration and operator skill.[96]
Computational tools have revolutionized data analysis across disciplines, allowing researchers to process large datasets efficiently. Programming languages like Python, with libraries such as NumPy for numerical computations and Pandas for data manipulation, are widely used for statistical modeling and machine learning applications.[97] R serves as a primary tool for statistical analysis and visualization, particularly in bioinformatics and social sciences, offering packages like ggplot2 for graphical representation.[98] Software such as MATLAB supports simulations and algorithm development in engineering and physics, while tools like Tableau and Power BI facilitate interactive data visualization without extensive coding.[99] Cloud-based platforms, including AWS and Google Cloud, enable scalable storage and high-performance computing for big data challenges.[100]
Citation and reference management software streamlines literature review processes by organizing sources and generating bibliographies. Zotero, an open-source tool, collects and annotates references from web pages and databases, integrating with word processors for seamless insertion.[101] Electronic lab notebooks like LabArchives provide digital recording of experiments, enhancing reproducibility through version control and searchability.[102] Survey platforms such as Qualtrics support quantitative data collection via online questionnaires, with built-in analytics for preliminary processing.[102]
As of 2025, artificial intelligence tools are increasingly integrated into research workflows for tasks like hypothesis generation, literature synthesis, and predictive modeling. Tools such as those leveraging large language models assist in summarizing papers and identifying patterns in datasets, though their outputs require validation to mitigate errors from training data biases.[103] In scientific domains, AI platforms for molecular modeling accelerate drug discovery by simulating protein interactions, with empirical studies showing productivity gains in targeted applications.[104] Despite enthusiasm, rigorous evaluation reveals that AI enhances efficiency in data-heavy fields but does not supplant causal reasoning or experimental design.[105]
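As a small example of the scripted data manipulation attributed to libraries like Pandas above, the sketch below groups a handful of invented measurements and computes summary statistics.

```python
# Sketch: grouping measurements and computing summaries with Pandas.
# The dataset is invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group": ["control", "control", "treated", "treated", "treated"],
    "outcome": [4.8, 5.1, 5.6, 5.9, 5.3],
})

summary = df.groupby("group")["outcome"].agg(["count", "mean", "std"])
print(summary)
```
Ethics and Integrity in Research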
Fundamental Ethical Principles
Fundamental ethical principles in research encompass standards designed to safeguard the integrity of scientific inquiry, protect participants and subjects, and ensure the reliability of knowledge production. These principles derive from historical precedents, including post-World War II responses to unethical experiments and domestic scandals like the Tuskegee syphilis study, which prompted formalized guidelines.[106] Core tenets emphasize honesty in data handling, accountability for outcomes, and fairness in resource allocation, countering incentives that might otherwise prioritize publication over truth.[107]
A foundational framework is provided by the Belmont Report of 1979, which identifies three basic principles for research involving human subjects: respect for persons, beneficence, and justice. Respect for persons requires treating individuals as autonomous agents capable of informed consent and providing extra protections for those with diminished autonomy, such as children or the cognitively impaired.[108] Beneficence mandates maximizing benefits while minimizing harms, entailing systematic assessment of risks against potential gains and avoidance of unnecessary suffering.[106] Justice demands equitable distribution of research burdens and benefits, preventing exploitation of vulnerable groups and ensuring fair selection of participants.[106]
Complementing these, the Singapore Statement on Research Integrity, issued in 2010 by the World Conference on Research Integrity, articulates four universal responsibilities: honesty, accountability, professional courtesy, and good stewardship. Honesty involves accurate reporting of methods, data, and findings without fabrication, falsification, or selective omission.[107] Accountability requires researchers to adhere to ethical norms, report errors, and accept responsibility for misconduct allegations. Professional courtesy promotes open sharing of data and ideas while respecting intellectual property and avoiding conflicts of interest. Good stewardship obliges efficient use of resources, mentoring of trainees, and dissemination of results to benefit society.[107]
Additional principles include objectivity, which necessitates minimizing personal biases through rigorous methodology and peer scrutiny, and transparency, which facilitates reproducibility by mandating detailed documentation of procedures and raw data.[109] The U.S. Office of Research Integrity defines misconduct narrowly as fabrication, falsification, or plagiarism, underscoring that ethical conduct extends beyond non-violation to the proactive pursuit of rigor and fairness. Violations of these principles, often driven by publication pressures or funding dependencies, undermine public trust, as evidenced by retractions exceeding 10,000 annually in biomedical literature by the early 2020s.[110] Adherence requires institutional mechanisms like institutional review boards, which independently evaluate protocols against these standards prior to initiation.[111]
Research Misconduct and Fraud
Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results, committed intentionally, knowingly, or recklessly.[112][113] Fabrication involves making up data or results and recording or reporting them as if genuine, while falsification entails manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.[112] Plagiarism includes the appropriation of another person's ideas, processes, results, or words without giving appropriate credit.[112] These acts deviate from accepted practices and undermine the integrity of the scientific enterprise, though not all errors or questionable research practices qualify as misconduct.[114]
Prevalence estimates for misconduct vary due to reliance on self-reports, which likely understate occurrences, and analyses of retractions, which capture only detected cases. Self-reported rates of fabrication, falsification, or plagiarism range from 2.9% to 4.5% across studies, with one international survey finding that one in twelve scientists admitted to such acts in the past three years.[115][116] Questionable research practices, such as selective reporting or failing to disclose conflicts, are more common, with up to 51% of researchers engaging in at least one.[117] Among retracted publications, misconduct accounts for the majority: a study of over 2,000 retractions found 67.4% attributable to fraud or suspected fraud (43.4%), duplicate publication (14.2%), or plagiarism (9.8%), far exceeding error-based retractions.[118] These figures suggest systemic under-detection, exacerbated by pressures in competitive fields like biomedicine.[115]
Principal causes include the "publish or perish" culture, where career advancement hinges on publication volume and impact, incentivizing corner-cutting amid grant competition and tenure demands.[119][120] Lack of oversight in large labs, inadequate training, and rewards for novel findings over replication further contribute, as do personal factors like ambition or desperation under funding shortages.[119] In academia, where replication is undervalued and positive results prioritized, these incentives distort behavior, with fraud more likely in high-stakes environments despite institutional norms against it.[121]
Notable cases illustrate the impacts: in the Hwang Woo-suk scandal, the South Korean researcher fabricated stem cell data in 2004-2005 publications, leading to retractions in Science and global scrutiny of cloning claims.[122] Similarly, John Darsee's 1980s fabrications at Harvard and NIH involved inventing experiments across dozens of papers, resulting in over 100 retractions and a ten-year funding ban.[123] Such incidents, often in biomedicine, highlight how undetected fraud can propagate for years before whistleblowers or statistical anomalies trigger investigations.[122]
Consequences encompass professional sanctions, including debarment from federal funding, institutional dismissal, and reputational harm, with eminent researchers facing steeper penalties than novices.[124] Retractions erode citations for affected work and linked studies, diminish journal impact factors, and foster public distrust in science, as seen in rising retraction rates from under 100 annually pre-2000 to thousands today.[125][126] Broader effects include wasted resources—billions in follow-on research—and policy missteps, such as delayed vaccine uptake from fraudulent autism-link claims.[118]
Prevention efforts focus on training in responsible conduct, institutional policies for data management and authorship, and oversight by bodies like the U.S. Office of Research Integrity (ORI), which investigates allegations and enforces agreements for corrections or retractions.[127][128] Promoting transparency via open data repositories, preregistration of studies, and incentives for replication can mitigate pressures, though implementation varies, with training alone insufficient without cultural shifts away from publication quantity.[127][129] Whistleblower protections and rigorous peer review post-publication are also emphasized to detect issues early.[130]
Institutional Review and Oversight
Institutional Review Boards (IRBs), also known internationally as Research Ethics Committees (RECs), serve as primary mechanisms for ethical oversight of research involving human subjects, reviewing protocols to ensure participant rights, welfare, and minimization of risks.[131][132] Established in the United States following the 1974 National Research Act, which responded to ethical failures like the Tuskegee syphilis study, IRBs must evaluate studies for compliance with federal regulations such as the Common Rule (45 CFR 46), assessing informed consent, risk-benefit ratios, and equitable subject selection.[133][134] Committees typically include at least five members with diverse expertise, including non-scientists and community representatives, to provide balanced scrutiny; reviews can be full board for higher-risk studies, expedited for minimal risk, or exempt for certain low-risk activities like educational surveys.[131][132]
For research involving animals, Institutional Animal Care and Use Committees (IACUCs) provide analogous oversight, mandated by the Animal Welfare Act of 1966 and Public Health Service Policy, conducting semiannual program reviews, inspecting facilities, and approving protocols to ensure humane treatment, the 3Rs principle (replacement, reduction, refinement), and veterinary care.[135][136] IACUCs, composed of scientists, non-affiliated members, and veterinarians, evaluate alternatives to animal use and monitor ongoing compliance, with authority to suspend non-compliant activities.[137] Globally, similar bodies exist, such as under the European Union's Directive 2010/63/EU, though implementation varies by jurisdiction.[135]
Broader institutional oversight addresses research integrity and misconduct through bodies like the U.S. Office of Research Integrity (ORI) within the Department of Health and Human Services, which investigates allegations of fabrication, falsification, or plagiarism in Public Health Service-funded research, imposes sanctions, and promotes education on responsible conduct.[138][139] Institutions maintain their own research integrity offices to handle inquiries, often following federal guidelines that require prompt reporting and due process, with ORI overseeing findings since its establishment in 1993 to centralize responses to misconduct cases.[140]
Critics argue that IRB processes impose excessive bureaucracy, causing delays—sometimes months for low-risk social science studies—and inconsistent decisions across institutions, potentially stifling legitimate research without commensurate improvements in participant protection.[134][141] Overreach occurs when IRBs review non-research activities like journalism or quality improvement, expanding beyond regulatory intent, as evidenced by complaints from fields like oral history where federal exemptions are ignored.[142][143] Empirical analyses indicate limited evidence that IRBs reduce harms effectively, with costs in time and resources diverting from core scientific aims, prompting calls for streamlined reviews or exemptions for minimal-risk work.[134][141] In dual-use research with potential misuse risks, ethics committees' roles remain underdeveloped, highlighting gaps in proactive oversight.[144]
Major Challenges and Systemic Issues
The Replication Crisis
The replication crisis denotes the systematic failure of numerous published scientific findings to reproduce in independent attempts, casting doubt on the reliability of empirical claims across multiple disciplines. This phenomenon emerged prominently in the early 2010s, particularly in psychology, where a large-scale effort by the Open Science Collaboration in 2015 attempted to replicate 100 studies published in top journals in 2008; only 36% yielded statistically significant results in the direction of the originals, with effect sizes approximately half as large as those initially reported.[145] Ninety-seven percent of the original studies had reported significant effects (p < 0.05), highlighting a stark discrepancy.[145] Similar issues have surfaced in other fields, though rates vary; for instance, a 2021 analysis found 61% replication success for 18 economics experiments and lower rates in cognitive psychology.[146]
Replication failures extend beyond psychology to areas like biology and medicine, where preclinical cancer research has shown particularly low reproducibility; one pharmaceutical company's internal checks in 2011-2012 replicated only 11% of 53 high-profile studies.[52] In economics, community forecasts anticipate around 58% replication rates, higher than in psychology or education but still indicative of systemic unreliability.[147] Fields with stronger experimental controls, such as physics, exhibit fewer such problems due to larger-scale validations and less reliance on small-sample statistical inference, though even there, isolated high-profile disputes occur.[148] Overall, the crisis underscores that much of the published literature may overestimate effect sizes due to selective reporting, eroding the foundational assumption of cumulative scientific progress.
Primary causes include publication bias, where journals preferentially accept novel, positive results while null or contradictory findings languish unpublished, inflating the apparent rate of "discoveries."[149] Questionable research practices exacerbate this: p-hacking involves flexibly analyzing data (e.g., excluding outliers or testing multiple outcomes) until a statistically significant result (p < 0.05) emerges by chance, while HARKing entails retrofitting hypotheses to fit observed data post-analysis.[150] Low statistical power from underpowered studies—often using small samples to detect implausibly large effects—further compounds the issue, as true effects require replication with adequate power to distinguish signal from noise.[151] These practices stem from academic incentives prioritizing quantity and novelty for tenure and funding over rigorous verification, with replication studies rarely published or funded.[10]
The crisis has profound implications, including eroded public trust in science, misallocation of resources toward building on false premises, and slowed progress in applied domains like medicine, where non-replicable preclinical findings delay effective therapies.[152] It also reveals flaws in peer review, which often fails to detect inflated claims, and highlights how institutional pressures in academia—dominated by metrics like citation counts—favor sensationalism over truth-seeking.[153]
In response, reforms emphasize transparency and rigor: pre-registration of hypotheses and analysis plans on platforms like OSF.io commits researchers before data collection, mitigating p-hacking and HARKing.[154] Open science initiatives promote sharing raw data, code, and materials, enabling independent verification, while calls for larger samples and Bayesian methods over rigid p-value thresholds aim to enhance power and inference.[10] Post-crisis, psychological studies show trends toward stronger effects, bigger samples, and fewer "barely significant" results, suggesting gradual improvement.[155] Dedicated replication journals and funding for verification efforts, alongside cultural shifts away from "publish or perish," represent ongoing efforts to realign incentives with reproducibility.[156]
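The inflation that p-hacking produces is easy to demonstrate numerically. In the sketch below, a simulated researcher measures ten independent outcomes with no true effects and reports only the smallest p-value; the sample size and outcome count are illustrative assumptions.

```python
# Sketch: multiple-outcome p-hacking. With ten null outcomes, reporting
# the best p-value yields "significance" far more often than the
# nominal 5% false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
runs, n, n_outcomes = 5_000, 30, 10  # all outcomes have zero true effect

hacked_hits = 0
for _ in range(runs):
    pvals = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
             for _ in range(n_outcomes)]
    hacked_hits += min(pvals) < 0.05

print(f"chance of at least one p < 0.05: {hacked_hits / runs:.0%}")  # ~40%
```

This is why preregistration, which fixes the outcomes and analyses in advance, directly removes the degrees of freedom the simulation exploits.
Biases in Research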
Biases in research encompass systematic deviations from true effects, arising from cognitive, methodological, or institutional factors that skew study design, execution, or reporting.[157] These errors undermine the reliability of scientific claims, with empirical evidence showing their prevalence across disciplines, particularly in fields reliant on subjective interpretation like psychology and social sciences.[158] For instance, confirmation bias leads researchers to selectively seek or interpret data aligning with preconceptions, often embedded in experimental design through choice of hypotheses or data analysis paths that favor expected outcomes.[159] Observer bias further compounds this by influencing data collection based on researchers' expectations, as seen in studies where subjective assessments yield results correlated with the observer's prior beliefs rather than objective measures.[158]
Methodological biases, such as selection and sampling bias, distort participant or data inclusion, producing non-representative results; for example, convenience sampling in clinical trials can overestimate treatment effects if healthier subjects are disproportionately included.[160] Publication bias exacerbates these issues by favoring studies with statistically significant or positive findings, with meta-analyses in psychology revealing that up to 73% of results lack strong evidence due to selective reporting, artificially inflating effect sizes in the literature.[161][162] In medicine, this manifests in overestimation of drug efficacy, as negative trials remain unpublished, distorting clinical guidelines.[163]
Funding or sponsorship bias occurs when financial supporters influence outcomes to align with their interests, evident in industry-sponsored research where positive results for the sponsor's product appear 3-4 times more frequently than in independent studies.[164] Examples include pharmaceutical trials selectively reporting favorable data or nutritional studies funded by food industries downplaying risks of high-fructose corn syrup.[165][166]
Ideological biases, particularly pronounced in academia, stem from the overrepresentation of left-leaning scholars—such as at Harvard where only 1% of faculty identify as conservative—leading to skewed research agendas that underexplore or dismiss hypotheses conflicting with progressive priors, like in social psychology where conservative viewpoints face hiring and publication barriers.[167][168] This systemic imbalance, with faculty political donations to Democrats outnumbering Republicans by ratios exceeding 10:1 in humanities and social sciences, fosters causal interpretations favoring environmental over genetic factors in behavior or policy outcomes that prioritize equity narratives over empirical trade-offs.[169]
Mitigating biases requires preregistration of protocols, blinded analyses, and diverse research teams, though institutional incentives like tenure tied to publication volume perpetuate them; empirical audits, such as those revealing 50-90% exaggeration in effect sizes due to combined biases, underscore the need for skepticism toward uncorroborated claims from ideologically homogeneous fields.[170][171] Mainstream academic sources often understate ideological distortions, attributing discrepancies to "facts" rather than selection effects, yet surveys confirm self-censorship among dissenting researchers due to peer hostility.[172][173]
Publication and Peer Review Flaws
Peer review serves as the primary mechanism for validating scientific manuscripts prior to publication, yet empirical evidence reveals systemic deficiencies that undermine its reliability as a quality filter. Studies demonstrate that peer review frequently fails to detect methodological errors or fraud, with experiments introducing deliberate flaws into submissions showing that reviewers miss most issues, as evidenced by a 1998 study in which only a fraction of injected errors were identified.[174] The process is subjective and prone to inconsistencies, with little rigorous data confirming its efficacy in improving manuscript quality or advancing scientific truth.[175]
Publication bias exacerbates these flaws by systematically favoring results with statistical significance or positive findings, distorting the scientific record and hindering meta-analyses. Defined as the selective dissemination of studies based on outcome direction or magnitude, this bias leads to overrepresentation of confirmatory evidence, as non-significant results face higher rejection rates from journals.[176][171] Quantitative assessments indicate that this skew can inflate effect sizes in systematic reviews by up to 30% in fields like psychology and medicine, perpetuating erroneous conclusions until replication efforts reveal discrepancies.[177]
Biases inherent in peer review further compromise objectivity, including institutional affiliation favoritism, where manuscripts from prestigious universities receive more lenient scrutiny, disadvantaging researchers from lesser-known institutions.[178] Ideological predispositions also influence evaluations, as shown in experiments where reviewers rated identical research on contentious topics like migration policy more favorably when aligned with prevailing academic paradigms, often reflecting left-leaning institutional norms that prioritize certain interpretive frameworks over empirical rigor.[179] Such biases, compounded by anonymity, enable ad hominem attacks or confirmation of entrenched views, as documented in analyses of review processes across disciplines.[180]
The rise in retractions underscores peer review's inability to prevent flawed or fraudulent work from entering the literature, with biomedical retractions quadrupling from 2000 to 2021 and exceeding 10,000 globally in 2023 alone.[181][182] Misconduct, including data fabrication, accounts for the majority of these withdrawals, with rates increasing tenfold since 1975, often undetected during initial review due to inadequate scrutiny of raw data or statistical practices.[118] This trend signals not only heightened vigilance via post-publication audits but also foundational weaknesses in pre-publication gatekeeping, where resource constraints and reviewer overload—exacerbated by unpaid labor—prioritize speed over thoroughness.[183]
Additional operational flaws include protracted delays averaging 6-12 months per review cycle and high costs borne by journals without commensurate benefits, fostering predatory publishing alternatives that bypass rigorous checks.[175] These issues collectively erode trust in published research, prompting calls for reforms like open review or statistical auditing, though evidence of their superiority remains preliminary.[184]
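The effect-size inflation attributed to publication bias above can be reproduced in a simple simulation: many small studies of a modest true effect are run, but only positive, statistically significant results are "published." The true effect and sample size here are assumptions chosen for illustration.

```python
# Sketch: publication bias. Filtering results on significance makes the
# average published effect overstate the true effect. Hypothetical values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, runs = 0.2, 30, 20_000

published = []
for _ in range(runs):
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    t, p = stats.ttest_ind(treated, control)
    diff = treated.mean() - control.mean()
    if p < 0.05 and diff > 0:           # only positive, significant results
        published.append(diff)

print(f"true effect: {true_effect}, mean published: {np.mean(published):.2f}")
```

Because only estimates large enough to clear the significance threshold survive the filter, the published mean lands far above the true value, which is the mechanism behind inflated effect sizes in the synthesized literature.
Funding and Incentive Distortions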
Scientific research is heavily influenced by funding mechanisms that prioritize measurable outputs, such as publications and grants, over long-term reliability or exploratory work. The "publish or perish" paradigm, where career advancement depends on publication volume, incentivizes researchers to produce numerous papers rather than rigorous, replicable findings, contributing to increased retractions and lower overall research quality.[185][186] Hyper-competition for limited grants exacerbates this, with scientists spending substantial time on proposal writing—up to 40% of their effort—diverting resources from actual experimentation.[187] This structure favors incremental, citation-maximizing studies over novel or null-result research, leading to stagnation in groundbreaking discoveries.[188]
Grant allocation processes introduce directional biases, steering research toward funder-preferred topics like high-impact or applied fields, while destabilizing foundational work through short-term funding cycles.[189] Industry sponsorship, a significant funding source, correlates with outcomes favoring sponsors' interests, such as selective reporting or design choices that inflate efficacy.[165][164] Government funding, which dominates public science, amplifies these issues; surveys indicate 34% of federally funded U.S. scientists have admitted to misconduct, including data manipulation, to align results with grant expectations.[190] Peer-reviewed grants often perpetuate conformity, as reviewers favor proposals mirroring established paradigms, suppressing disruptive ideas.
These incentives directly fuel the replication crisis by devaluing verification studies, which offer few publications or grants compared to original "positive" findings.[191][192] Researchers face no systemic rewards for replication, despite evidence that up to 50% of studies in fields like psychology fail to reproduce, eroding trust in scientific claims.[193] Funder emphasis on novelty and societal impact further marginalizes replications, creating a feedback loop where unreliable results propagate.[194] Reforms, such as funding dedicated replication teams or rewarding quality metrics over quantity, have been proposed but face resistance due to entrenched career incentives.[10]
Professionalization and Institutions
Training and Career Paths
Training for research careers typically begins with an undergraduate degree in a relevant discipline, followed by enrollment in a doctoral program. The PhD, as the cornerstone of advanced research training, emphasizes original investigation, data analysis, and scholarly communication, often spanning 5 to 11 years in total duration, inclusive of coursework, comprehensive examinations, and dissertation research.[195] In biomedical sciences, median time to degree ranges from 4.88 to 5.73 years across subfields.[196] Completion rates vary by discipline, with approximately 57% of candidates finishing within 10 years and 20% within 7 years, influenced by funding availability and program structure.[197]
Postdoctoral fellowships commonly follow the PhD, providing 1 to 5 years of mentored research to build publication records, grant-writing skills, and the independence required for permanent roles.[198] These positions, often temporary and funded by grants or institutions, function as an extended apprenticeship, though they increasingly serve as a holding pattern amid limited faculty openings.[199] In the United States, postdoctoral training hones not only technical expertise but also management and collaboration abilities essential for leading labs or teams.[200]
Academic career progression traditionally involves securing a tenure-track assistant professorship after postdoc experience, followed by evaluation for tenure after 5 to 7 years based on research output, teaching, and service.[201] However, success rates remain low: fewer than 17% of new PhDs in science, engineering, and health-related fields obtain tenure-track positions within 3 years of graduation.[202] By 2017, only 23% of U.S. PhD holders in these areas occupied tenured or tenure-track academic roles, a decline from prior decades.[203] In computer science, the proportion advancing to tenured professorships stands at about 11.73%.[204] Engineering fields show similar constraints, with an average 12.4% likelihood of securing tenure-track jobs over recent years.[205]
Beyond academia, PhD recipients pursue diverse paths in industry, government, and non-profits, leveraging analytical and problem-solving skills. Common roles include research scientists in private R&D, data scientists, policy analysts, and consultants, where private-sector employment now rivals academic hires in scale.[206][207] Medical science liaisons and environmental analysts represent specialized applications, often offering higher initial salaries than academic starts but less autonomy in pure research.[206]
Systemic challenges arise from an oversupply of PhDs relative to academic positions, exacerbating competition and prolonging insecure postdoc phases that function as low-paid labor for grant-funded projects.[208][202] Universities sustain PhD production to meet teaching and research demands via graduate assistants, yet this model yields far more doctorates than faculty slots, with only 10-30% securing permanent academic roles depending on field.[209] This imbalance fosters career uncertainty, prompting calls for better preparation in non-academic skills and transparency about job prospects during training.[210][211]
Academic and Research Institutions
Academic and research institutions, encompassing universities and specialized research centers, represent the institutional backbone of organized scientific inquiry, evolving from medieval teaching-focused universities to modern entities that integrate education, discovery, and application. The modern research university model originated in early 19th-century Prussia with Wilhelm von Humboldt's vision at the University of Berlin in 1810, emphasizing the unity of research and teaching to foster original knowledge production.[212] This paradigm spread globally, particularly influencing the United States, where Johns Hopkins University, founded in 1876, became the first explicitly research-oriented institution, prioritizing graduate training and specialized scholarship over undergraduate instruction alone.[213] By the late 19th century, American public universities adopted similar structures, expanding graduate programs and research facilities, which propelled advancements in fields like physics and biology.[214]
In contemporary practice, these institutions conduct the majority of fundamental research, providing infrastructure such as laboratories, archives, and computational resources essential for empirical investigation and theoretical development. Universities train future researchers through doctoral programs, where students contribute to faculty-led projects, thereby perpetuating expertise while generating new data and publications.[215] They also oversee ethical compliance via institutional review boards, which evaluate study designs for risks to human and animal subjects, though implementation varies and can introduce bureaucratic delays. Beyond universities, dedicated research institutes like Germany's Max Planck Society or the United States' National Institutes of Health focus on targeted domains, often collaborating with academia to translate findings into practical outcomes.[216]
However, systemic challenges undermine their efficacy, including heavy reliance on competitive grant funding, which favors incremental, grant-attractive projects over high-risk, foundational work. The tenure-track system, designed to safeguard intellectual independence, frequently incentivizes prolific but superficial output to meet promotion criteria, with post-tenure productivity sometimes declining as measured by publication rates.[217] Ideological homogeneity prevails, with approximately 60% of faculty in the humanities and social sciences identifying as liberal or far-left, correlating with reduced viewpoint diversity and potential suppression of heterodox inquiries, as evidenced by self-censorship surveys among academics.[169] This imbalance, more pronounced in elite institutions, can distort research priorities toward prevailing narratives, as seen in uneven scrutiny of politically sensitive topics.[218]
Publishing and Dissemination
Scientific publishing primarily occurs through peer-reviewed journals, where researchers submit manuscripts detailing their findings, methodologies, and analyses for evaluation by independent experts before acceptance.[219] The process typically involves initial editorial screening, peer review for validity and novelty, revisions based on feedback, and final production including copy-editing and formatting.[220] In 2022, global output of science and engineering articles reached approximately 3.3 million, with China producing 898,949 and the United States 457,335, reflecting the scale and international distribution of dissemination efforts.[221]
Preprints have emerged as a key mechanism for rapid dissemination, enabling authors to share unrefereed versions of their work on public servers such as arXiv for physics and mathematics or bioRxiv for biology, often months before formal publication.[222] This approach accelerates knowledge sharing, allows community feedback to refine research, and has gained prominence, particularly during the COVID-19 pandemic when preprints facilitated timely updates on evolving data.[223] However, preprints lack formal validation, prompting journals to increasingly integrate them into workflows by reviewing posted versions or encouraging prior deposition.[224]
Open access (OA) models have transformed dissemination by removing paywalls, with gold OA—where articles are immediately freely available upon publication—rising from 14% of global outputs in 2014 to 40% in 2024.[225] This shift, driven by funder mandates and institutional policies, contrasts with subscription-based access, though it introduces article processing charges that can burden authors and strain society publishers' revenues amid rising costs.[226] Hybrid models and diamond OA (no-fee, community-supported) address some barriers, but predatory OA journals exploiting these trends underscore the need for rigorous vetting.[227]
Conferences complement journal publication by providing platforms for oral presentations, posters, and networking, enabling real-time dissemination and critique of preliminary or complete findings.[228] Events organized by professional societies or field-specific bodies, such as those in health sciences or physics, foster collaboration and often lead to subsequent publications, though virtual formats have expanded access post-2020.[229] Beyond these, supplementary methods like data repositories, policy briefs, and targeted media outreach extend reach, prioritizing empirical validation over broad publicity.[230]
Economics and Global Context
Research Funding Sources
Research funding derives primarily from four categories: government agencies, private industry, higher education institutions, and philanthropic foundations or nonprofits. Globally, total gross domestic expenditure on research and development (GERD) approached $3 trillion in 2023, with the United States and China accounting for nearly half of this total through combined public and private investments.[231][232] In high-income economies, business enterprises typically fund 60-70% of overall R&D, emphasizing applied and development-oriented work, while governments allocate a larger share—often over 40%—to basic research.[233][65]
Government funding constitutes the backbone of basic and public-good research, channeled through national agencies and supranational programs. In the United States, federal obligations for R&D totaled $201.9 billion in the proposed fiscal year 2025 budget, with key performers including the National Institutes of Health (NIH), which supports biomedical research; the National Science Foundation (NSF), focusing on foundational science; and the Department of Energy (DOE), advancing energy and physical sciences.[234] These agencies funded 40% of U.S. basic research in 2022, prioritizing investigator-initiated grants amid competitive peer review processes.[65] In the European Union, the Horizon Europe program disburses billions annually for collaborative projects across member states, with the European Commission awarding over 2,490 grants in recent cycles, often targeting strategic areas like climate and digital innovation.[235] China, investing heavily in state-directed R&D, channels funds through ministries and programs like the National Natural Science Foundation of China, supporting rapid scaling in fields such as artificial intelligence and quantum technologies, with public expenditures exceeding those of the U.S. in higher education and government labs by 2023.[236][237]
Private industry provides the largest volume of funding in market-driven economies, directing resources toward commercially viable innovations. In the U.S., businesses financed 69.6% of GERD in recent years, performing $602 billion in R&D in 2021 alone, predominantly in sectors like pharmaceuticals, technology, and manufacturing where intellectual property yields direct returns.[233][238] This sector contributed 37% of basic research funding in 2022, often through corporate labs or partnerships with academia, though priorities align with profit motives rather than pure knowledge advancement.[65] Globally, industry R&D intensity—measured as expenditure relative to GDP—reaches 2-3% in OECD countries, with firms like those in semiconductors and biotech recouping investments via patents and market dominance.[239]
Higher education institutions and philanthropic entities supplement these sources with intramural funds and targeted grants. U.S. universities expended $59.6 billion in federally supported R&D in fiscal year 2023, but also drew 5% from state/local governments and internal revenues, enabling flexibility in exploratory work.[240] Private foundations account for about 6% of academic R&D, with examples including the Bill & Melinda Gates Foundation funding global health initiatives and the Burroughs Wellcome Fund supporting biomedical training, typically awarding grants from $15,000 to over $500,000 per project.[241][242] These sources, while smaller in scale, often fill gaps in high-risk or interdisciplinary areas overlooked by larger funders.[243]
International Variations and Statistics
Global research and development (R&D) expenditures exhibit stark international disparities, with advanced economies dominating total spending while select nations prioritize intensity relative to GDP. In 2023, OECD countries collectively allocated approximately 2.7% of GDP to R&D, totaling around $1.9 trillion, though non-OECD performers like China contribute substantially to aggregate figures.[239] The United States led in absolute R&D outlays at over $700 billion in 2022, followed closely by China, which surpassed $500 billion amid rapid state-driven expansion.[244] In terms of R&D intensity, Israel invested 5.56% of GDP in 2022, South Korea 4.93%, and Belgium 3.47%, contrasting with lower shares in emerging markets like India (0.64%) and Brazil (1.15%).[245] These variations reflect differing economic structures, policy emphases, and institutional capacities; high-intensity nations often feature concentrated business-sector investments.[246]
Scientific publication output further highlights quantity-driven divergences, particularly Asia's ascent. In 2023, China produced over 1 million science and engineering articles, accounting for about 30% of a global total exceeding 3 million, while the United States produced around 500,000.[247] India and Germany followed with over 100,000 each, underscoring a shift from Western dominance; China's volume has grown via incentives like publication quotas, though this correlates with proliferation in lower-tier journals.[247] High-quality output, per Nature Index metrics tracking contributions to elite journals, saw China edging the U.S. in share for 2023-2024, yet U.S. publications maintain superior average citation rates, with 20-30% higher impact in fields like biomedicine.[248][249]
| Country | R&D as % GDP (2022) | Total Publications (2023) | Avg. Citations per Paper (est. recent) |
|---|---|---|---|
| United States | 3.46 | ~500,000 | High (leads globally) [249][245][247] |
| China | 2.40 | >1,000,000 | Moderate (quantity bias) [245][247] |
| South Korea | 4.93 | ~80,000 | Above average [245][247] |
| Germany | 3.13 | ~110,000 | High [245][247] |
| Japan | 3.30 | ~70,000 | High [245][247] |
