Operations research
from Wikipedia

Operations research (British English: operational research) (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a branch of applied mathematics that deals with the development and application of analytical methods to improve management and decision-making.[1][2] The term management science is occasionally used as a synonym.[3]

Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.[4]

Overview


Operations research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process.[5] Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem).

Major sub-disciplines in modern operational research, as identified by the journal Operations Research[6] and the Journal of the Operational Research Society,[7] include the following:

History


In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize sometimes complex systems, and has become an area of active academic and industrial research.[4]

Historical origins


In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving sometimes complex decisions (problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead.[8] Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge.[9] Beginning in the 20th century, study of inventory management could be considered[by whom?] the origin of modern operations research with economic order quantity developed by Ford W. Harris in 1913. Operational research may[original research?] have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these to the social sciences.[10]

Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe and Robert Watson-Watt.[11] Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken.[12]

Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman, (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig) looked for ways to make better decisions in such areas as logistics and training schedules.

Second World War


The modern field of operational research arose during World War II.[dubious – discuss] In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control".[13] Other names for it included operational analysis (UK Ministry of Defence from 1962)[14] and quantitative management.[15]

During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army.[16]

Patrick Blackett worked for several different organizations during the war. Early in the war while working for the Royal Aircraft Establishment (RAE) he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941.[17]

A Liberator in standard RAF green/dark earth/black night bomber finish as originally used by Coastal Command

In 1941, Blackett moved from the RAE to the Navy, working first with RAF Coastal Command and then, early in 1942, with the Admiralty.[18] Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel Prize winners and many other people who went on to be pre-eminent in their fields.[19][20] They undertook a number of crucial analyses that aided the war effort. Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than on the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones.[21]

While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command, they were painted black for night-time operations. At the suggestion of CC-ORS, a test was run to see whether that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. The analysis indicated that the change would allow 30% more submarines to be attacked and sunk for the same number of sightings.[22] As a result of these findings, Coastal Command changed its aircraft to white undersurfaces.

Other work by the CC-ORS indicated that, on average, if the trigger depth of aerial-delivered depth charges were changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target, then at 100 feet the charges would do no damage (because the U-boat would not have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target, it had time to alter course under water, so the chances of it being within the 20-foot kill zone of the charges were small. It was more efficient to attack those submarines close to the surface, when the targets' locations were better known, than to attempt their destruction at greater depths, when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics".[23]

Map of Kammhuber Line

Bomber Command's Operational Research Section (BC-ORS) analyzed a report of a survey carried out by RAF Bomber Command.[citation needed] For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted, and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted, because the fact that the aircraft were able to return with these areas damaged indicated the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. Their suggestion to remove some of the crew, so that an aircraft loss would result in fewer personnel losses, was also rejected by RAF command. Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers that returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain. The areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft.[24] This story has been disputed,[25] with a similar damage assessment study completed in the US by the Statistical Research Group at Columbia University,[26] the result of work done by Abraham Wald.[27]

When Germany organized its air defences into the Kammhuber Line, it was realized by the British that if the RAF bombers were to fly in a bomber stream they could overwhelm the night fighters who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses.[28]

The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60 mines laid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines in Japanese routes.[29]

Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Marianas Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective number to enable all members of the pack to engage targets discovered on their individual patrol stations; revealed that glossy enamel paint was more effective camouflage for night fighters than conventional dull camouflage paint finish, and a smooth paint finish increased airspeed by reducing skin friction.[29]

On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) were landed in Normandy in 1944, and they followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing and anti-tank shooting.

After World War II


In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR:

"To examine quantitatively whether the user organization is getting from the operation of its equipment the best attainable contribution to its overall objective."[11]

With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to operational matters but was extended to encompass equipment procurement, training, logistics and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The simplex algorithm for linear programming was developed in 1947.[30]

In the 1950s, the term operations research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare-parts theory, queueing theory, simulation and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and The Institute of Management Sciences (TIMS) in 1953.[31] Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8,000 members.[citation needed] Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on linear programming.[citation needed]

In the 1950s and 1960s, chairs of operations research were established in the U.S. and the United Kingdom (from 1964 in Lancaster) in the management faculties of universities. Further influences from the U.S. on the development of operations research in Western Europe can be traced to this period. The authoritative[citation needed] OR textbooks from the U.S. were published in German in Germany and in French in France (but not in Italian[citation needed]), such as George Dantzig's Linear Programming and Extensions (1963) and C. West Churchman et al.'s Introduction to Operations Research (1957). The latter was also published in Spanish in 1973, opening Operations Research to Latin American readers at the same time. NATO provided an important impetus for the spread of Operations Research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s—the one in 1956 with 120 participants—bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group for Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics, at the Catholic University of Leuven in 1966.[citation needed]

With the development of computers over the following three decades, operations research became able to solve problems with hundreds of thousands of variables and constraints, and the large volumes of data required for such problems can be stored and manipulated very efficiently.[30] Much of operations research (known today as analytics) relies upon stochastic variables and therefore upon access to truly random numbers. Fortunately, the cybernetics field required the same level of randomness, and the development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategies, emergency planning, optimizing all facets of industry and economy, and, in all likelihood, terrorist attack planning and certainly counterterrorist attack planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized as a collection of mathematical models lacking an empirical basis of data collection for applications: how to collect data is not presented in the textbooks, and because of this lack of data, the textbooks also contain no computer applications.[32]

Problems addressed


Operational research is also used extensively in government, where it supports evidence-based policy.

Management science


The field of management science (MS) refers to the use of operations research models in business.[35] Stafford Beer characterized the field in this way in 1967.[36] Like operational research itself, management science is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links to economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods, including mathematical modeling, statistics and numerical algorithms, to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research.

The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. Of course, the techniques of management science are not restricted to business applications but may be applied to the military, medicine, public administration, charitable groups, political groups, or community groups.

Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence.[37]


Some of the fields that have considerable overlap with Operations Research and Management Science include:[38]

Applications


Applications are abundant in industries such as airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which it has contributed insights and solutions is vast. It includes:[37]

  • Scheduling (of airlines, trains, buses, etc.)
  • Assignment (assigning crew to flights, trains or buses; employees to projects; commitment and dispatch of power generation facilities)
  • Facility location (deciding the most appropriate location for new facilities such as warehouses, factories or fire stations)
  • Hydraulics and piping engineering (managing the flow of water from reservoirs)
  • Health services (information and supply chain management)
  • Game theory (identifying, understanding and developing strategies adopted by companies)
  • Urban design
  • Computer network engineering (packet routing, timing, analysis)
  • Telecom and data communication engineering (packet routing, timing, analysis)[39]

Management is also concerned with so-called soft operational analysis, which concerns methods for strategic planning, strategic decision support, and problem structuring. In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, during the past 30 years[vague], a number of non-quantified modeling methods have been developed. These include:[citation needed]

Societies and journals


Societies


The International Federation of Operational Research Societies (IFORS)[40] is an umbrella organization for operational research societies worldwide, representing approximately 50 national societies including those in the US,[41] UK,[42] France,[43] Germany, Italy,[44] Canada,[45] Australia,[46] New Zealand,[47] the Philippines,[48] India,[49] Japan and South Africa.[50] For the institutionalization of operations research, the foundation of IFORS in 1960 was of decisive importance, stimulating the foundation of national OR societies in Austria, Switzerland and Germany. IFORS has held important international conferences every three years since 1957.[51] The constituent members of IFORS form regional groups, such as the Association of European Operational Research Societies (EURO) in Europe.[52] Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO)[53] and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC).[54]

In 2004, the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better[55] which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR.[56]

Journals of INFORMS


The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class, according to 2005 Journal Citation Reports.[57] They are:

Other journals


These are listed in alphabetical order of their titles.

from Grokipedia
Operations research (OR), also known as operational research, is an interdisciplinary scientific discipline that applies advanced analytical methods—including mathematical modeling, optimization, statistics, and simulation—to help organizations make better decisions and efficiently manage complex systems. It focuses on developing quantitative techniques to solve practical problems involving resource allocation, process improvement, and decision-making across diverse sectors. The origins of operations research trace back to World War II, when British scientists first coined the term in 1940 to describe systematic studies aimed at improving military operations, particularly the integration of radar technology into air defense systems. During the war, teams of mathematicians, physicists, and engineers applied scientific methods to optimize convoy routing, bombing strategies, and radar deployment, leading to significant efficiency gains. Post-war, OR expanded into civilian applications, with the establishment of the Operations Research Society of America (now part of INFORMS) in 1952 to promote the field in industry and government. By the 1950s and 1960s, advancements in computing enabled more sophisticated models, transforming OR into a cornerstone of managerial decision-making. Key methods in operations research include linear programming for optimizing linear objective functions subject to constraints, dynamic programming for sequential decision-making under uncertainty, and simulation to test system behaviors in uncertain environments. Other prominent techniques encompass network analysis for flow problems, queueing theory for service systems, and game theory for competitive scenarios, often integrated with data analytics and machine learning in modern applications. These methods emphasize a structured approach: problem formulation, model building, solution derivation, and validation through real-world testing. Operations research finds wide application in logistics and supply chain management, where it optimizes inventory levels and transportation routes to reduce costs; in healthcare, for scheduling staff and allocating resources to improve patient outcomes; and in finance, for portfolio optimization and risk management. In transportation, OR models enhance scheduling and urban planning, while in energy, they support grid reliability and transmission planning. The field continues to evolve with machine learning and AI, addressing contemporary challenges such as sustainable energy systems and pandemic response.

Introduction

Definition and Scope

Operations research (OR) is an interdisciplinary field that applies advanced analytical methods, including mathematical modeling, statistics, and algorithms, to enhance and optimize the performance of complex systems. It draws from disciplines such as mathematics, statistics, and computer science to address managerial and operational challenges in both public and private sectors. At its core, OR employs a scientific approach to transform data into actionable insights, focusing on quantitative analysis to improve efficiency and effectiveness. The core principles of OR revolve around a structured problem-solving process that emphasizes rigor and validation. This process typically includes problem formulation to clearly define objectives and constraints, model construction to represent the system mathematically or statistically, solution derivation using analytical or computational techniques, and validation through testing and sensitivity analysis to ensure robustness. Data collection is integral throughout, providing the empirical foundation for accurate modeling and informed adjustments. This systematic methodology distinguishes OR as a prescriptive discipline, aiming not only to diagnose issues but also to recommend optimal courses of action. The scope of OR encompasses both deterministic problems, where outcomes are predictable under fixed conditions, and stochastic problems, which account for uncertainty and variability. It addresses key areas such as resource allocation to maximize efficiency with limited inputs, scheduling to coordinate activities and minimize delays, inventory management to balance holding and ordering costs, and system design to enhance overall performance. These applications span short-term operational decisions and long-term strategic planning, always prioritizing quantifiable improvements in system outcomes. OR differs from related fields like analytics and optimization in its holistic integration of quantitative methods. While analytics broadly encompasses descriptive (what happened) and predictive (what might happen) analyses, OR emphasizes prescriptive analysis to determine the best actions, often through optimization as one of its primary tools but extending to simulation and other techniques for broader system analysis. Optimization, though central to OR, refers specifically to techniques for finding maxima or minima in models, whereas OR represents a comprehensive methodology that incorporates optimization within a full scientific framework.

Importance and Impact

Operations research (OR) has delivered substantial economic benefits across industries by optimizing supply chains and resource allocation, resulting in billions of dollars in savings globally. For instance, finalist projects for the INFORMS Franz Edelman Award, which recognize high-impact OR applications, have collectively generated over $431 billion in quantifiable benefits since the award's inception in 1972 (as of 2025), including efficiencies in logistics and manufacturing that reduce operational costs by streamlining inventory and transportation. In 2025, the award recognized the use of OR in athlete selection and training strategies that contributed to Olympic gold medals. In the energy sector, OR models implemented by the Midcontinent Independent System Operator (MISO) achieved savings of $2.1 to $3.0 billion from 2007 to 2010 through improved reliability and transmission planning. These optimizations extend to global challenges, where OR supports climate modeling by enhancing renewable energy integration and resource forecasting, and aids pandemic response through optimization and simulation, as demonstrated in over 23 studies during the COVID-19 pandemic that improved vaccine distribution and healthcare delivery. On the societal front, OR enhances public services by reducing waste and improving efficiency in areas like transportation and healthcare delivery. Applications in urban transportation have minimized congestion and emissions, contributing to more equitable access to services, while in humanitarian aid, organizations like the World Food Programme have used OR to distribute 4.2 million metric tons of food and $2.1 billion in cash transfers to 97.1 million beneficiaries in 2019, amplifying impact on food security. Furthermore, OR advances the Sustainable Development Goals (SDGs) by integrating environmental, social, and economic dimensions into decision-making, such as optimizing supply chains for circular economies and reducing carbon footprints in manufacturing, thereby supporting global efforts to achieve SDGs like responsible consumption and production. The impact of OR has evolved from providing tactical military advantages during World War II, such as radar optimization and convoy routing, to becoming a strategic tool in business and government, where it drives measurable returns on investment (ROI). Post-war, OR transitioned to civilian applications, with industries adopting techniques for inventory control and production scheduling. In the U.S., OR has generated savings in the hundreds of millions to billions through applications such as scheduling and medical resource allocation, evolving into big data-driven analytics for broader economic and social benefits. Despite these gains, challenges in OR adoption persist, including data quality issues that undermine model accuracy, such as inconsistent or incomplete datasets in real-world settings, and resistance to quantitative methods due to organizational inertia and lack of technical expertise. These barriers can limit scalability, particularly in sectors reliant on qualitative factors, though addressing them through better data management and training enhances overall effectiveness.

History

Origins and Early Developments

The origins of operations research can be traced to philosophical and practical efforts in efficiency and management during the late 19th and early 20th centuries, particularly through the lens of scientific management pioneered by Frederick Winslow Taylor. Taylor, an American mechanical engineer, developed his principles in the 1880s and 1890s while working in U.S. manufacturing industries, emphasizing the systematic study of tasks to eliminate inefficiency. His time-motion studies involved breaking down work processes into elemental components, measuring them precisely, and reorganizing them to maximize productivity, as detailed in his 1911 book The Principles of Scientific Management. This approach influenced industrial practices by promoting data-driven decision-making over rule-of-thumb methods, laying a foundational framework for analyzing complex systems that later informed operations research. Early mathematical foundations for operations research emerged from economic theory and engineering innovations in the same period. French economist Léon Walras contributed significantly in the 1870s with his general equilibrium theory, which modeled markets as interconnected systems where supply and demand balance across multiple commodities through a set of simultaneous equations. This work, outlined in Éléments d'économie politique pure (1874), introduced concepts for achieving equilibrium, serving as a precursor to the modeling techniques used in operations research for resource allocation. Complementing this, American industrialist Henry Ford applied efficiency principles in the early 1900s, most notably with the introduction of the moving assembly line at his Highland Park plant in 1913, which reduced Model T production time from over 12 hours to about 90 minutes by standardizing tasks and minimizing worker movement. Ford's methods, inspired by Taylorism, demonstrated practical optimization in mass production, influencing the quantitative analysis of workflows. Key figure A. P. Rowe, an aeronautical engineer and superintendent of the Bawdsey Research Station from 1936, played a pivotal role in institutionalizing these ideas by forming interdisciplinary teams to evaluate system performance quantitatively. Rowe coined the term "operational research" in 1940 to describe these interdisciplinary efforts integrating science into operational decision-making. By the late 1930s, these efforts focused on military contexts amid rising geopolitical tensions, particularly in addressing radar deployment and convoy protection challenges. At Bawdsey, Rowe initiated the first operational research studies in 1937, assigning scientists like E. C. Williams and G. A. Roberts to assess radar's effectiveness in air defense, optimizing detection ranges and response times through empirical data collection and modeling. Similar analyses began exploring convoy routing to mitigate U-boat threats in the Atlantic, using probabilistic methods to evaluate escort allocations and formation sizes based on simulated scenarios. These efforts formalized operations research as a discipline for integrating scientific analysis into operational decision-making.

World War II Contributions

During World War II, operations research emerged as a critical discipline through the efforts of interdisciplinary teams applying scientific methods to military challenges, particularly in the Battle of the Atlantic. In March 1941, British physicist Patrick Blackett formed a small group of scientists, informally known as "Blackett's Circus," to analyze operations and optimize resource use against German U-boats. This team, consisting of physicists, physiologists, and engineers, pioneered systematic data collection and analysis to evaluate operational effectiveness, setting a model for future OR units across the Allies. The group addressed key problems such as optimizing convoy routes and sizes, radar deployment for patrols, and bombing strategies for anti-submarine attacks. Analysis revealed that the rate of merchant ship sinkings per attack was independent of convoy size, leading to recommendations for larger convoys that reduced the number of required escorts and overall shipping losses; this shift contributed to greatly decreased losses in the Atlantic convoys. For radar and search operations, Blackett's team optimized patrol patterns and recommended painting aircraft undersurfaces white to reduce visibility, which cut the average detection distance by 20% and doubled sightings during patrols. In defensive tactics, their studies showed that ships firing on approaching aircraft during level bombing runs reduced the probability of being sunk from 25% to 10%, influencing defensive procedures. Methodologically, Blackett's Circus introduced rigorous data-driven modeling and empirical testing, emphasizing probabilistic analysis over intuition to quantify uncertainties in search and engagement scenarios; this approach transformed ad hoc decision-making into evidence-based strategies. Blackett's leadership in these efforts earned him the U.S. Medal for Merit in 1946 for contributions to the Allied war effort, complementing his 1948 Nobel Prize in Physics for unrelated work. The success of British OR prompted its adoption by the United States, where the Anti-Submarine Warfare Operations Research Group (ASWORG) was established in April 1942 under Philip Morse to support the U.S. Navy's Tenth Fleet. ASWORG collaborated with British teams, developing optimal escort screening plans and convoy configurations that further minimized vulnerabilities to attacks. OR practices spread to the U.S. Army, where dedicated groups analyzed logistics efficiencies and troop movements, enhancing overall Allied logistical capabilities.

Post-War Expansion

Following World War II, operations research transitioned from military applications to civilian sectors as demobilized personnel and institutions adapted wartime methodologies to industrial and business problems in the late 1940s and 1950s. Practitioners who had honed analytical techniques during the war returned to academia, government, and private industry, applying quantitative methods to optimize production, logistics, and inventory in growing economies. The RAND Corporation, established as an independent nonprofit in 1948 from the earlier Project RAND, played a pivotal role in this transfer, initially focusing on defense-related research while facilitating the dissemination of operations research tools to commercial contexts. Key institutional milestones marked the formalization of operations research during this period. In 1952, the Operations Research Society of America (ORSA) was founded to promote the discipline among professionals in the United States, providing a platform for collaboration between military, academic, and industrial experts. That same year, the first dedicated journal, Operations Research (initially the Journal of the Operations Research Society of America), began publication, enabling the sharing of seminal advancements and case studies. These developments solidified operations research as a distinct field, bridging wartime innovations with peacetime applications. The discipline expanded globally in the post-war era, with societies forming across Europe and Asia to adapt operations research to local industrial needs. In the United Kingdom, the Operational Research Society was established in 1948, building on wartime efforts to support nationalized industries like transportation and energy. In Japan, the Operations Research Society of Japan was founded in 1957, aiding post-war reconstruction through applications in manufacturing and efficiency improvements amid rapid economic recovery. The 1950s also saw the widespread adoption of linear programming, pioneered by George Dantzig in the late 1940s, which became a cornerstone for solving complex optimization problems in industry worldwide, influencing sectors from oil refining to agriculture. Cold War tensions sustained military funding for operations research, particularly through institutions like RAND, which drove innovations in game theory and systems analysis during the 1950s and 1960s. Substantial U.S. defense investments supported theoretical work on strategic decision-making, such as John von Neumann's contributions to game theory for conflict modeling, while systems analysis integrated interdisciplinary approaches to evaluate policy options in nuclear deterrence. This era's advancements not only bolstered military planning but also enriched civilian methodologies by refining tools for decision-making under uncertainty.

Methodologies and Techniques

In quantitative analysis, particularly in operations research and management science, problem-solving approaches include trial and error (informal guessing and testing), complete enumeration (brute-force checking of all possibilities), using an algorithm (systematic, guaranteed solution procedure), and trying various approaches (heuristic methods exploring multiple strategies for good-enough solutions in complex problems).

Optimization Methods

Optimization methods constitute a cornerstone of operations research, focusing on deterministic techniques to identify optimal solutions for decision-making problems involving resource allocation, logistics, and scheduling. These methods assume known parameters and seek exact solutions within defined constraints, contrasting with probabilistic approaches. Central to this domain are formulations that model objectives and restrictions mathematically, enabling systematic solution procedures. Key techniques include linear, integer, nonlinear, dynamic, and multi-objective programming, each addressing specific problem structures prevalent in operational contexts. Linear programming (LP) models problems where both the objective function and constraints are linear, providing a framework for optimizing outcomes such as profit maximization or cost minimization subject to limited resources. The standard form of an LP problem is

$$\max\ \mathbf{c}^\top \mathbf{x} \quad \text{subject to} \quad A\mathbf{x} \leq \mathbf{b},\ \mathbf{x} \geq \mathbf{0},$$

where $\mathbf{x} \in \mathbb{R}^n$ represents the decision variables, $\mathbf{c} \in \mathbb{R}^n$ the objective coefficients, $A \in \mathbb{R}^{m \times n}$ the constraint matrix, and $\mathbf{b} \in \mathbb{R}^m$ the resource bounds. This formulation assumes non-negativity for simplicity, though extensions handle equalities and free variables via slack or surplus variables. LP problems are geometrically interpreted as finding an optimal vertex of the convex polytope defined by the constraints, which exists whenever the problem is feasible and bounded. The simplex algorithm, devised by George Dantzig, efficiently solves LP problems by moving along the edges of the feasible polytope from one basic feasible solution to an adjacent, improved one. The process begins by initializing a basic feasible solution, often using artificial variables in Phase I to establish feasibility. In the main Phase II, an entering variable is selected as the non-basic variable with the most negative reduced cost (indicating potential improvement), while the leaving variable is chosen via the minimum ratio test on the updated constraints to maintain feasibility. Pivoting updates the basis by swapping variables, recomputing the tableau until all reduced costs are non-negative, signaling optimality. To prevent cycling—revisiting bases indefinitely—pivot rules like Bland's rule select the lowest-index eligible variable for entering or leaving. Polynomial-time alternatives, such as interior-point methods, complement the simplex method for large-scale problems, but the tableau-based approach remains foundational for its intuitive edge-following. Integer programming extends LP by requiring some or all variables to take integer values, addressing discrete decisions like selecting whole units in inventory or facility location. The branch-and-bound method, pioneered by Land and Doig, solves these by relaxing integrality to obtain LP bounds and systematically partitioning the solution space. It constructs a tree where each node represents a subproblem with added constraints (e.g., fixing a fractional variable to its floor or ceiling value); bounds from LP relaxations and incumbent integer solutions prune infeasible or suboptimal branches, ensuring enumeration terminates at the global optimum. For pure integer linear programs, strong formulations like Gomory cuts enhance bounding.
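As an illustration of the LP standard form above, the following is a minimal sketch, not drawn from the article, that solves a small product-mix instance with SciPy's linprog (its HiGHS backend provides simplex and interior-point solvers); the objective and constraint data are assumed for the example.

```python
# Minimal LP sketch (illustrative data): maximize 3*x1 + 5*x2
# subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0.
# linprog minimizes, so the objective coefficients are negated.
from scipy.optimize import linprog

c = [-3.0, -5.0]                      # negated objective (maximize 3x1 + 5x2)
A_ub = [[1.0, 0.0],                   # x1          <= 4
        [0.0, 2.0],                   # 2x2         <= 12
        [3.0, 2.0]]                   # 3x1 + 2x2   <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)                # optimal vertex (2, 6) with value 36
```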
A classic example is the 0-1 knapsack problem, which maximizes value $\sum_i v_i x_i$ subject to $\sum_i w_i x_i \leq W$ and $x_i \in \{0,1\}$, modeling cargo loading or project selection; branch-and-bound efficiently explores subsets, while bounding via LP relaxation discards unpromising paths. Nonlinear programming (NLP) handles cases where the objective or constraints involve nonlinear functions, common in economic models or engineering designs. Gradient descent serves as a foundational method for unconstrained or constrained NLPs, iteratively updating the solution as

$$\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha_k \nabla f(\mathbf{x}_k),$$

where $\alpha_k$ is a step size chosen via exact or approximate line search to ensure descent, and $\nabla f$ is the gradient. For constrained problems, projected gradient or barrier methods adapt this by incorporating feasibility. Convergence relies on convexity for global optima, though local minima suffice for many applications; second-order methods like Newton's accelerate convergence but require Hessian computation. In operations research, NLPs optimize nonlinear costs in supply chains or portfolio selection, often solved via sequential quadratic programming, which linearizes constraints around iterates. Dynamic programming (DP) addresses sequential decision problems by decomposing them into stages, leveraging overlapping subproblem structure for efficiency. Bellman's principle of optimality states that an optimal policy has the property that, regardless of the initial state and decision, the remaining decisions form an optimal subpolicy for the resulting state. This enables backward or forward recursion, typically via the value function $V_n(s)$ representing the maximum value from stage $n$ in state $s$. The recursive formulation for finite-horizon problems is

$$V_n(s) = \max_a \left\{ r(s,a) + \gamma V_{n-1}(f(s,a)) \right\},$$

where $r(s,a)$ is the immediate reward for action $a$ in state $s$, $f(s,a)$ the next-state transition, and $\gamma \in [0,1)$ a discount factor (or $\gamma = 1$ for undiscounted problems). Initialization sets $V_0(s) = 0$, and policies derive from argmax choices. DP excels in staged problems like inventory control or resource allocation over time, avoiding exponential enumeration through memoization. In sequencing applications, such as job shop scheduling, DP minimizes total completion time by optimally ordering jobs on machines, computing costs stage by stage from the final machine backward. Multi-objective optimization arises when multiple, often conflicting, criteria must be balanced, such as cost versus environmental impact in design. Solutions lie on the Pareto frontier, the set of non-dominated points where no objective improves without worsening another; Vilfredo Pareto introduced this concept in economic analysis. Generating the front involves solving scalarized problems, like the weighted sum method, which combines objectives as $\max \sum_i w_i f_i(\mathbf{x})$ with $w_i \geq 0$ and $\sum_i w_i = 1$, varying weights to trace efficient solutions. This linear scalarization suffices for convex problems but requires variants like the $\epsilon$-constraint method for non-convexity, ensuring comprehensive coverage of the Pareto front. In operations research, these methods support trade-off analysis in planning and design, prioritizing solutions via post-optimization tools like trade-off curves.
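For integer weights, the 0-1 knapsack model discussed above can also be solved directly with the dynamic-programming recursion described in this section; below is a minimal sketch with assumed item values, weights, and capacity.

```python
# Minimal DP sketch (illustrative data): 0-1 knapsack,
# maximizing sum(v_i * x_i) subject to sum(w_i * x_i) <= W.
def knapsack(values, weights, capacity):
    # best[c] = best value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```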

Stochastic and Simulation Techniques

Stochastic and simulation techniques form a core component of operations research for modeling systems under uncertainty, where randomness in inputs, processes, or outcomes precludes deterministic analysis. These approaches employ probability theory to quantify risks, predict behaviors, and support decision-making in dynamic environments like supply chains, telecommunications, and service operations. By incorporating stochastic elements such as random arrivals or variable processing times, they enable the evaluation of long-term performance metrics, including expected costs, delays, and resource utilization, often through analytical models or computational approximations. Queueing theory provides analytical frameworks for systems where entities arrive randomly and await service, capturing congestion and efficiency under probabilistic demands. A standardized description uses A/B/s notation, where A specifies the interarrival time distribution, B the service time distribution, and s the number of parallel servers; this notation, proposed by D. G. Kendall in 1953, facilitates concise model specification and comparison across diverse applications. The M/M/1 queue exemplifies a basic yet influential model, assuming Poisson-distributed arrivals at rate $\lambda$ and exponentially distributed service times at rate $\mu$ with one server. For stability, the traffic intensity $\rho = \lambda/\mu$ must be less than 1, yielding the steady-state probability of $n$ customers as

$$P_n = (1 - \rho)\,\rho^n,$$

from which average queue length and waiting time follow via the birth-death process. This formulation originated in A. K. Erlang's 1909 analysis of telephone traffic congestion, laying groundwork for modern teletraffic engineering. Little's law complements these models by relating system-wide averages: the long-run average number of items $L$ equals the arrival rate $\lambda$ times the average time per item $W$, or $L = \lambda W$, holding for stable systems with general arrival and service processes under mild conditions. Proven rigorously by John D. C. Little in 1961, this theorem underpins performance evaluation across queueing networks without requiring detailed distributional assumptions.
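A minimal sketch of the M/M/1 formulas and Little's law above, with assumed arrival and service rates:

```python
# Minimal M/M/1 sketch (illustrative rates).
# Uses P_n = (1 - rho) * rho**n and Little's law L = lambda * W.
lam, mu = 4.0, 5.0            # arrival and service rates (per hour), rho < 1
rho = lam / mu                # traffic intensity = 0.8

L  = rho / (1 - rho)          # mean number in system (sum of n * P_n)
W  = L / lam                  # mean time in system, by Little's law
Lq = rho**2 / (1 - rho)       # mean number waiting in queue
Wq = Lq / lam                 # mean waiting time, Little's law for the queue

print(f"rho={rho:.2f}  L={L:.2f}  W={W:.2f} h  Lq={Lq:.2f}  Wq={Wq:.2f} h")
```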
Markov decision processes (MDPs) extend stochastic modeling to sequential decision problems, framing decisions in environments with probabilistic state transitions and rewards. An MDP is defined by a state space $S$, action space $A$, transition probabilities $P(s' \mid s, a)$ governing the probability of moving to state $s'$ from $s$ under action $a$, and reward function $R(s, a)$; the Markov property ensures future states depend only on the current state and action. Introduced by Richard Bellman in 1957, MDPs formalize dynamic programming for uncertain sequential choices, such as inventory management under variable demand. The value iteration algorithm solves discounted infinite-horizon MDPs by iteratively computing the value function $V(s)$, which represents the maximum expected discounted reward starting from state $s$. The update rule is

$$V_{k+1}(s) = \max_a \left[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V_k(s') \right],$$

where $\gamma \in (0,1)$ is the discount factor; under contraction-mapping properties it converges to the optimal value $V^*(s) = \max_\pi \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, \pi\right]$, enabling policy extraction via argmax actions. This method, inherent to Bellman's dynamic programming framework, handles moderate-sized state-action spaces efficiently.
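A minimal value-iteration sketch for a tiny two-state MDP; the transition probabilities and rewards are made-up assumptions for illustration only.

```python
# Minimal value-iteration sketch (illustrative MDP data).
import numpy as np

states, actions = [0, 1], [0, 1]
# P[a][s][s'] = transition probability, R[s][a] = immediate reward
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.5, 0.5], [0.1, 0.9]]])    # action 1
R = np.array([[1.0, 0.0],                   # rewards R(s, a)
              [2.0, 3.0]])
gamma = 0.9

V = np.zeros(len(states))
for _ in range(1000):                        # iterate until (near) convergence
    Q = R + gamma * np.array([[P[a, s] @ V for a in actions] for s in states])
    V_new = Q.max(axis=1)                    # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)                    # greedy policy extraction
print(V, policy)
```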
Monte Carlo simulation approximates solutions to intractable stochastic systems by generating numerous random realizations of the process and averaging outcomes, offering flexibility for complex, non-Markovian scenarios. Pioneered by John von Neumann and Stanislaw Ulam in the 1940s, the technique leverages random sampling to estimate integrals or expectations, such as system throughput, by simulating paths from probabilistic models and computing empirical means; its efficacy stems from the law of large numbers, ensuring convergence to true values as the sample size grows. In operations research, it assesses designs like production lines or logistics networks where analytical tractability fails, often integrated with optimization for parameter tuning. To enhance efficiency, variance-reduction methods like antithetic variates mitigate the high computational cost of naive sampling by introducing controlled correlations. Developed by J. M. Hammersley and K. W. Morton in the 1950s, the approach generates pairs of simulations from oppositely transformed random variables—e.g., uniform draws $U$ and $1 - U$—to induce negative dependence, yielding an unbiased estimator with lower variance: if $\hat{\theta}_1$ and $\hat{\theta}_2$ are paired estimates, the average $(\hat{\theta}_1 + \hat{\theta}_2)/2$ has variance $[\operatorname{Var}(\hat{\theta}_1) + \operatorname{Var}(\hat{\theta}_2) + 2\operatorname{Cov}(\hat{\theta}_1, \hat{\theta}_2)]/4$, which is reduced when the covariance is negative. This technique proves particularly effective for smooth functions in financial risk assessment or queueing simulations. Reliability analysis quantifies the dependability of systems against failures, focusing on survival probabilities and operational uptime amid stochastic breakdowns. The exponential distribution models constant failure rates $\lambda$, with reliability function $R(t) = e^{-\lambda t}$ implying the memoryless property—conditional failure risk remains $\lambda$ regardless of age—suitable for non-aging components like certain electronics or software faults. Seminal treatments in Richard E. Barlow and Frank Proschan's 1965 monograph establish probabilistic foundations for coherent systems, where component failures propagate via structure functions. For repairable systems with exponential failure and repair times (rates $\lambda$ and $\mu$, respectively), the steady-state availability—the long-run proportion of time the system operates—is

$$A = \frac{\mu}{\lambda + \mu} = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}},$$

where $\text{MTTF} = 1/\lambda$ and $\text{MTTR} = 1/\mu$; this formula, derived from alternating renewal theory, guides redundancy and maintenance policies in critical infrastructure like power grids. Queueing models from these techniques further inform healthcare applications, such as predicting patient wait times in emergency departments to improve resource allocation.
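A minimal sketch of antithetic variates on a toy expectation (the integrand and sample size are assumptions, not from the article); pairing U with 1 - U induces the negative covariance described above:

```python
# Minimal Monte Carlo sketch with antithetic variates (illustrative target):
# estimate E[f(U)] for f(u) = exp(u), U ~ Uniform(0, 1); exact value is e - 1.
import math
import random

random.seed(0)
n = 100_000

# Naive estimator: average of f over independent uniform draws.
naive = sum(math.exp(random.random()) for _ in range(n)) / n

# Antithetic estimator: pair each draw U with 1 - U and average the pair,
# inducing negative covariance for monotone f and reducing variance.
anti = 0.0
for _ in range(n // 2):
    u = random.random()
    anti += (math.exp(u) + math.exp(1.0 - u)) / 2.0
anti /= (n // 2)

print(naive, anti, math.e - 1.0)   # both estimates approach e - 1
```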

Network and Decision Analysis

Network flows represent a fundamental class of problems in operations research, modeling the maximum rate at which material, data, or value can be sent from a source to a sink in a capacitated graph. The max-flow min-cut theorem establishes that the maximum flow value in a network equals the minimum capacity of a cut separating the source from the sink, providing a duality result that bounds achievable flows. This theorem, proved by Ford and Fulkerson, relies on capacity constraints where each edge has a non-negative capacity limiting the flow through it, ensuring conservation of flow at intermediate nodes. The Ford-Fulkerson algorithm computes the maximum flow by iteratively finding augmenting paths from source to sink in the residual graph and augmenting the flow along these paths until no such path exists. An augmenting path is a path in the residual network where forward edges have residual capacity greater than zero, and backward edges allow flow reduction to reroute excess. Capacity constraints are enforced by updating residual capacities after each augmentation, subtracting the flow from forward residuals and adding it to backward ones. This method integrates with linear programming formulations for network optimization, enabling scalable solutions in large-scale systems. Shortest path algorithms address finding the minimum-weight path between nodes in a graph, crucial for routing and scheduling in networks. Dijkstra's algorithm efficiently solves this for non-negative edge weights using a priority queue to select the next node with the smallest tentative distance. It maintains a distance array initialized to infinity except for the source at zero, relaxing edges from the current node and updating distances if a shorter path is found, with the priority queue ensuring greedy selection of the closest unvisited node. The algorithm's time complexity is $O((V + E) \log V)$ with a binary-heap priority queue, making it practical for sparse graphs. Minimum spanning trees (MSTs) connect all vertices in an undirected graph with minimum total edge weight, avoiding cycles. Kruskal's algorithm achieves this by sorting edges in non-decreasing weight order and greedily adding the next edge if it connects disjoint components, using a union-find structure to track connectivity. It proves optimal by showing that any MST can be transformed into the one produced without increasing cost, leveraging the cut property that the lightest edge across a cut is in some MST. This approach runs in $O(E \log E)$ time, suitable for graphs with up to thousands of vertices. Decision analysis provides structured frameworks for evaluating choices under uncertainty, incorporating probabilities and utilities to guide rational decisions. Decision trees model sequential decisions and chance events as a branching diagram, where branches represent alternatives and outcomes, allowing backward induction to compute expected values at each node. Expected utility extends this by assigning utilities to outcomes rather than monetary values, maximizing the expected utility under von Neumann-Morgenstern axioms of rationality, which assume completeness, transitivity, continuity, and independence. Sensitivity analysis in decision trees assesses how changes in probabilities or utilities affect the optimal choice, revealing robustness by varying inputs and observing shifts in expected utility rankings.
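Returning to the shortest-path discussion above, a minimal Dijkstra sketch over an assumed toy graph, using a binary-heap priority queue:

```python
# Minimal Dijkstra sketch (illustrative graph): shortest distances from a
# source over non-negative edge weights.
import heapq

def dijkstra(graph, source):
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)        # greedy pick: closest unsettled node
        if d > dist[u]:
            continue                      # stale queue entry, skip
        for v, w in graph[u]:             # relax each outgoing edge
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 4)],
         "b": [("t", 1)], "t": []}
print(dijkstra(graph, "s"))               # {'s': 0, 'a': 2, 'b': 3, 't': 4}
```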
The analytic hierarchy process (AHP) supports multi-criteria decision making by decomposing complex problems into hierarchies of criteria, subcriteria, and alternatives, using pairwise comparisons to derive priority weights. Experts compare elements on a 1-9 scale of relative importance, forming a reciprocal matrix whose principal eigenvector yields normalized weights, with consistency checked via the consistency ratio. For alternatives, local priorities are synthesized across the hierarchy using weighted sums, enabling ranking even with qualitative judgments. AHP's eigenvalue method ensures ratio-scale measurements, distinguishing it from ordinal approaches. Heuristics and metaheuristics approximate solutions to NP-hard problems where exact methods are computationally infeasible. Genetic algorithms, inspired by natural evolution, evolve a population of candidate solutions through iterative generations. Selection favors fitter individuals based on an objective function, probabilistically choosing parents proportional to fitness. Crossover recombines parent solutions by swapping segments, creating offspring that inherit beneficial traits, while mutation randomly alters bits to introduce diversity and escape local optima. Termination occurs after a fixed number of generations or convergence, yielding near-optimal solutions for problems like scheduling or routing.
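A minimal AHP sketch with an assumed 3x3 pairwise-comparison matrix, deriving priority weights from the principal eigenvector and checking the consistency ratio as described above:

```python
# Minimal AHP sketch (illustrative judgments on the 1-9 scale).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],      # reciprocal pairwise-comparison matrix
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority weights

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)                  # consistency index
ri = 0.58                                     # Saaty random index for n = 3
print(weights, "CR =", ci / ri)               # CR < 0.1 is usually acceptable
```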

Applications

Military and Defense

Operations research played a pivotal role in World War II military efforts, particularly in optimizing convoy routing and air defense systems to minimize losses. During the Battle of the Atlantic, operations research analysts examined U-boat attack patterns on North Atlantic convoys from 1941 to 1942, determining that the absolute number of ships sunk was independent of convoy size, but the percentage lost decreased significantly with larger formations. This led to recommendations for increasing convoy sizes, which reduced the proportion of vessels lost and enhanced overall shipping safety, as detailed in foundational works like Morse and Kimball's Methods of Operations Research. In air defense, early operations research teams focused on maximizing the effectiveness of limited radar resources, applying scientific methods to improve detection and interception rates against aerial threats, thereby contributing to more efficient resource allocation in defensive operations. During the Cold War, operations research advanced significantly in weapon systems analysis, providing analytical frameworks for evaluating military capabilities and combat effectiveness. The U.S. Army leveraged operations research to model and assess various weapon systems, integrating simulation and optimization techniques to inform procurement and deployment decisions amid escalating tensions. A key contribution was the application of Lanchester's laws, which model attrition in combat using differential equations to predict outcomes based on force sizes and engagement rates. The basic modern aimed-fire equations are

$$\frac{dN_1}{dt} = -a N_2, \qquad \frac{dN_2}{dt} = -b N_1,$$

where $N_1$ and $N_2$ represent the sizes of opposing forces, and $a$ and $b$ are combat effectiveness coefficients; these models were extended during the Cold War to simulate large-scale engagements and guide strategic planning. Such analyses helped prioritize investments in systems like missiles and aircraft, enhancing deterrence postures. In modern military applications, operations research supports drone deployment through optimization models that determine optimal routing, surveillance coverage, and swarm coordination to maximize mission success while minimizing risks. For cyber warfare, simulation-based operations research enables the modeling of adversary behaviors and network vulnerabilities, allowing forces to test defensive strategies and predict attack outcomes in virtual environments prior to real-world deployment. Additionally, operations research enhances supply chain resilience in conflicts by employing stochastic optimization to identify vulnerabilities and develop contingency plans, ensuring timely delivery of resources under disrupted conditions, as demonstrated in U.S. Air Force frameworks for agile logistics. A notable case study is the 1991 Gulf War, where operations research-driven logistics optimization facilitated rapid deployment and sustainment, enabling the coalition to reposition forces over 350 miles in just 18 days using coordinated truck convoys and airlifts, which saved significant time and resources compared to traditional methods. This included handling 2.5 billion gallons of fuel and 41,000 cargo containers efficiently through centralized planning, underscoring operations research's role in reducing operational delays and costs estimated in the millions.
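A minimal sketch that numerically integrates the aimed-fire equations above; the force sizes and effectiveness coefficients are assumptions for illustration, not historical values.

```python
# Minimal Lanchester aimed-fire sketch (illustrative inputs):
# Euler integration of dN1/dt = -a*N2, dN2/dt = -b*N1 until one side is depleted.
def lanchester(n1, n2, a, b, dt=0.01):
    t = 0.0
    while n1 > 0 and n2 > 0:
        n1, n2 = n1 - a * n2 * dt, n2 - b * n1 * dt   # simultaneous update
        t += dt
    return t, max(n1, 0.0), max(n2, 0.0)

# Square law: force 2 prevails only if b*N1**2 < a*N2**2.
t, n1, n2 = lanchester(n1=1000, n2=800, a=0.05, b=0.08)
print(f"t={t:.1f}  survivors: N1={n1:.0f}  N2={n2:.0f}")
```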
Ethical considerations in military targeting models, particularly those incorporating operations research and AI, raise concerns about accountability and proportionality; for instance, automated decision aids may obscure human judgment, potentially leading to unintended civilian harm if models prioritize efficiency over moral constraints. Frameworks like the relative ethical violation model evaluate targeting alternatives by weighing violations of principles such as discrimination and necessity, ensuring operations research aligns with international humanitarian law.

Business and Logistics

Operations research has profoundly influenced business and logistics by providing mathematical models and algorithms to optimize commercial operations, focusing on cost reduction, efficiency gains, and revenue maximization in profit-driven environments. In supply chain management, OR techniques enable firms to balance inventory levels, route vehicles efficiently, and schedule production to meet demand while minimizing operational costs. These applications, rooted in post-war industrial adoption, leverage optimization and stochastic methods to address real-world complexities like variable demand and resource constraints. For instance, seminal models from the mid-20th century continue to form the backbone of modern enterprise systems in manufacturing and distribution. Inventory management represents a cornerstone of OR applications in business, where models determine optimal stock levels to minimize holding and ordering costs amid uncertain demand. The Economic Order Quantity (EOQ) model, introduced by Ford W. Harris in 1913, calculates the ideal order size that balances setup costs and inventory carrying charges. The formula is

$$Q^* = \sqrt{\frac{2DS}{H}},$$

where $D$ is the demand per period, $S$ the fixed cost per order, and $H$ the holding cost per unit per period.
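A minimal EOQ sketch with assumed demand and cost parameters:

```python
# Minimal EOQ sketch (illustrative parameters):
# Q* = sqrt(2 * D * S / H) balances ordering and holding costs.
import math

D = 12_000      # annual demand (units per year)
S = 50.0        # fixed cost per order
H = 2.4         # holding cost per unit per year

Q = math.sqrt(2 * D * S / H)
orders_per_year = D / Q
annual_cost = (D / Q) * S + (Q / 2) * H   # ordering cost + holding cost
print(f"Q* = {Q:.0f} units, {orders_per_year:.1f} orders/yr, cost = {annual_cost:.0f}")
```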