Problem solving
Problem solving is the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields. The former is an example of simple problem solving (SPS) addressing one issue, whereas the latter is complex problem solving (CPS) with multiple interrelated obstacles.[1] Another classification of problem-solving tasks is into well-defined problems with specific obstacles and goals, and ill-defined problems in which the current situation is troublesome but it is not clear what kind of resolution to aim for.[2] Similarly, one may distinguish formal or fact-based problems requiring psychometric intelligence, versus socio-emotional problems which depend on the changeable emotions of individuals or groups, such as tactful behavior, fashion, or gift choices.[3]
Solutions require sufficient resources and knowledge to attain the goal. Professionals such as lawyers, doctors, programmers, and consultants are largely problem solvers for issues that require technical skills and knowledge beyond general competence. Many businesses have found profitable markets by recognizing a problem and creating a solution: the more widespread and inconvenient the problem, the greater the opportunity to develop a scalable solution.
There are many specialized problem-solving techniques and methods in fields such as science, engineering, business, medicine, mathematics, computer science, philosophy, and social organization. The mental techniques to identify, analyze, and solve problems are studied in psychology and cognitive sciences. Also widely researched are the mental obstacles that prevent people from finding solutions; problem-solving impediments include confirmation bias, mental set, and functional fixedness.
Definition
The term problem solving has a slightly different meaning depending on the discipline. For instance, it is a mental process in psychology and a computerized process in computer science. There are two different types of problems: ill-defined and well-defined; different approaches are used for each. Well-defined problems have specific end goals and clearly expected solutions, while ill-defined problems do not. Well-defined problems allow for more initial planning than ill-defined problems.[2] Solving problems sometimes involves dealing with pragmatics (the way that context contributes to meaning) and semantics (the interpretation of the problem). The ability to understand what the end goal of the problem is, and what rules could be applied, represents the key to solving the problem. Sometimes a problem requires abstract thinking or coming up with a creative solution.
Problem solving has two major domains: mathematical problem solving and personal problem solving. Each concerns some difficulty or barrier that is encountered.[4]
Psychology
Problem solving in psychology refers to the process of finding solutions to problems encountered in life.[5] Solutions to these problems are usually situation- or context-specific. The process starts with problem finding and problem shaping, in which the problem is discovered and simplified. The next step is to generate possible solutions and evaluate them. Finally a solution is selected to be implemented and verified. Problems have an end goal to be reached; how one gets there depends on problem orientation (problem-solving coping style and skills) and systematic analysis.[6]
Mental health professionals study human problem-solving processes using methods such as introspection, behaviorism, simulation, computer modeling, and experiment. Social psychologists examine the person–environment relationship aspect of the problem and independent and interdependent problem-solving methods.[7] Problem solving has been defined as a higher-order cognitive process and intellectual function that requires the modulation and control of more routine or fundamental skills.[8]
Empirical research shows that many different strategies and factors influence everyday problem solving.[9] Rehabilitation psychologists studying people with frontal lobe injuries have found that deficits in emotional control and reasoning can be remediated with effective rehabilitation, improving the capacity of injured persons to resolve everyday problems.[10] Interpersonal everyday problem solving depends upon personal motivational and contextual components. One such component is the emotional valence of "real-world" problems, which can either impede or aid problem-solving performance. Researchers have focused on the role of emotions in problem solving,[11] demonstrating that poor emotional control can disrupt focus on the target task, impede problem resolution, and lead to negative outcomes such as fatigue, depression, and inertia.[12] In one conceptualization, human problem solving consists of two related processes: problem orientation (the motivational, attitudinal, and affective approach to problematic situations) and problem-solving skills.[13] People's strategies cohere with their goals[14] and stem from the process of comparing oneself with others.
Cognitive sciences
Among the first experimental psychologists to study problem solving were the Gestaltists in Germany, such as Karl Duncker in The Psychology of Productive Thinking (1935).[15] Perhaps best known is the work of Allen Newell and Herbert A. Simon.[16]
Experiments in the 1960s and early 1970s asked participants to solve relatively simple, well-defined, but not previously seen laboratory tasks.[17][18] These simple problems, such as the Tower of Hanoi, admitted optimal solutions that could be found quickly, allowing researchers to observe the full problem-solving process. Researchers assumed that these model problems would elicit the characteristic cognitive processes by which more complex "real world" problems are solved.
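The Tower of Hanoi also illustrates the decomposition principle discussed below: the n-disk problem reduces to two (n−1)-disk subproblems plus one move. A minimal recursive sketch in Python (the function and peg names are illustrative, not drawn from the studies cited here):

```python
def hanoi(n, source, target, spare):
    """Print an optimal move sequence for n disks from source to target.

    Decomposition: move n-1 disks aside, move the largest disk,
    then move the n-1 disks back on top of it.
    """
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)            # clear the way
    print(f"move disk {n}: {source} -> {target}")  # move largest disk
    hanoi(n - 1, spare, target, source)            # restack on top of it

hanoi(3, "A", "C", "B")  # 2**3 - 1 = 7 moves, the optimum for 3 disks
```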
A prominent problem-solving technique identified by this research is the principle of decomposition, illustrated by the recursive solution above.[19]
Computer science
Much of computer science and artificial intelligence involves designing automated systems to solve a specified type of problem: to accept input data and compute a correct or adequate response, reasonably quickly. Algorithms are the recipes or instructions, written into computer programs, that direct such systems.
Steps for designing such systems include problem determination, heuristics, root cause analysis, de-duplication, analysis, diagnosis, and repair. Analytic techniques include linear and nonlinear programming, queuing systems, and simulation.[20] A large, perennial obstacle is to find and fix errors in computer programs: debugging.
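As an illustration of an algorithm as such a "recipe", here is a short Python sketch of binary search; the section does not name a specific algorithm, so this example is ours:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1.

    Each comparison halves the remaining interval, so the loop
    runs O(log n) times -- a "reasonably quick" response.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target, if present, lies to the right
        else:
            hi = mid - 1   # target, if present, lies to the left
    return -1

assert binary_search([2, 3, 5, 7, 11, 13], 7) == 3
```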
Logic
Formal logic concerns issues like validity, truth, inference, argumentation, and proof. In a problem-solving context, it can be used to formally represent a problem as a theorem to be proved, and to represent the knowledge needed to solve the problem as the premises to be used in a proof that the problem has a solution.
The use of computers to prove mathematical theorems using formal logic emerged as the field of automated theorem proving in the 1950s. It included the use of heuristic methods designed to simulate human problem solving, as in the Logic Theory Machine, developed by Allen Newell, Herbert A. Simon and J. C. Shaw, as well as algorithmic methods such as the resolution principle developed by John Alan Robinson.
In addition to its use for finding proofs of mathematical theorems, automated theorem-proving has also been used for program verification in computer science. In 1958, John McCarthy proposed the advice taker, to represent information in formal logic and to derive answers to questions using automated theorem-proving. An important step in this direction was made by Cordell Green in 1969, who used a resolution theorem prover for question-answering and for other applications in artificial intelligence, such as robot planning.
The resolution theorem-prover used by Cordell Green bore little resemblance to human problem solving methods. In response to criticism of that approach from researchers at MIT, Robert Kowalski developed logic programming and SLD resolution,[21] which solves problems by problem decomposition. He has advocated logic for both computer and human problem solving[22] and computational logic to improve human thinking.[23]
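The spirit of problem decomposition in logic programming can be conveyed in a few lines. The following Python fragment performs propositional backward chaining over Horn clauses, reducing a goal to the subgoals of a matching clause; the clauses are invented for illustration, and real SLD resolution additionally handles variables via unification:

```python
# Horn clauses as (head, body) pairs; a fact has an empty body.
clauses = [
    ("mortal", ["human"]),   # mortal :- human.
    ("human", ["greek"]),    # human  :- greek.
    ("greek", []),           # greek.  (a fact)
]

def solve(goal):
    """Backward chaining: try each clause whose head matches the goal,
    and recursively solve the subgoals in its body."""
    return any(head == goal and all(solve(sub) for sub in body)
               for head, body in clauses)

print(solve("mortal"))  # True: mortal <- human <- greek <- fact
```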
Engineering
When products or processes fail, problem-solving techniques can be used to develop corrective actions that can be taken to prevent further failures. Such techniques can also be applied to a product or process prior to an actual failure event, in order to predict, analyze, and mitigate a potential problem in advance. Techniques such as failure mode and effects analysis can proactively reduce the likelihood of problems.
In either the reactive or the proactive case, it is necessary to build a causal explanation through a process of diagnosis. In deriving an explanation of effects in terms of causes, abduction generates new ideas or hypotheses (asking "how?"); deduction evaluates and refines hypotheses based on other plausible premises (asking "why?"); and induction justifies a hypothesis with empirical data (asking "how much?").[24] The objective of abduction is to determine which hypothesis or proposition to test, not which one to adopt or assert.[25] In the Peircean logical system, the logic of abduction and deduction contribute to our conceptual understanding of a phenomenon, while the logic of induction adds quantitative details (empirical substantiation) to our conceptual knowledge.[26]
Forensic engineering is an important technique of failure analysis that involves tracing product defects and flaws. Corrective action can then be taken to prevent further failures.
Reverse engineering attempts to discover the original problem-solving logic used in developing a product by disassembling the product and developing a plausible pathway to creating and assembling its parts.[27]
Physics
In physics, problem solving refers to the process by which one transforms an initial physical situation into a goal state by applying physics-specific reasoning and analysis. This involves identifying the relevant physical principles, making assumptions, formulating and manipulating equations, and checking whether the result is reasonable.[28]
Solving a physics problem is not simply a matter of applying or recalling a formula; it requires understanding the underlying concepts and navigating a "problem space" of possible knowledge states toward the goal.
Military science
In military science, problem solving is linked to the concept of "end-states", the conditions or situations which are the aims of the strategy.[29]: xiii, E-2 The ability to solve problems is important at any military rank, but is essential at the command and control level. It results from deep qualitative and quantitative understanding of possible scenarios. Effectiveness in this context is an evaluation of results: to what extent the end states were accomplished.[29]: IV-24 Planning is the process of determining how to effect those end states.[29]: IV-1
Processes
Some models of problem solving involve identifying a goal and then a sequence of subgoals towards achieving this goal. Anderson, who introduced the ACT-R model of cognition, modelled this collection of goals and subgoals as a goal stack: the mind contains a stack of goals and subgoals to be completed, with a single task being carried out at any time.[30]: 51
Knowledge of how to solve one problem can be applied to another problem, in a process known as transfer.[30]: 56
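The goal-stack idea can be sketched in a few lines of Python; this is an informal illustration of the mechanism, not ACT-R's actual implementation, and the goals are invented:

```python
# The top of the stack is the single task currently being attended to.
# Working on a goal may replace it with its subgoals.
decompose = {  # invented goal -> subgoals table (in execution order)
    "make tea": ["fetch kettle", "boil water", "pour water"],
}

goal_stack = ["make tea"]
while goal_stack:
    goal = goal_stack.pop()                           # attend to top goal
    if goal in decompose:
        goal_stack.extend(reversed(decompose[goal]))  # first subgoal on top
    else:
        print("doing:", goal)                         # primitive action
# prints: fetch kettle, boil water, pour water
```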
Problem-solving strategies
Problem-solving strategies are steps for overcoming the obstacles to achieving a goal. The iteration of such strategies over the course of solving a problem is the "problem-solving cycle".[31]
Common steps in this cycle include recognizing the problem, defining it, developing a strategy to fix it, organizing knowledge and resources available, monitoring progress, and evaluating the effectiveness of the solution. Once a solution is achieved, another problem usually arises, and the cycle starts again.
Insight is the sudden "aha!" solution to a problem, the birth of a new idea to simplify a complex situation. Solutions found through insight are often more incisive than those from step-by-step analysis. A quick solution process requires insight to select productive moves at different stages of the problem-solving cycle. Unlike Newell and Simon's formal definition of a move problem, there is no consensus definition of an insight problem.[32]
Some problem-solving strategies include:[33]
- Abstraction: solving the problem in a tractable model system to gain insight into the real system
- Analogy: adapting the solution to a previous problem that has similar features or mechanisms
- Brainstorming: (especially among groups of people) suggesting a large number of solutions or ideas and combining and developing them until an optimal solution is found
- Bypasses: transforming the problem into another problem that is easier to solve, bypassing the barrier, then transforming that solution back into a solution to the original problem
- Critical thinking: analysis of available evidence and arguments to form a judgement via rational, skeptical, and unbiased evaluation
- Divide and conquer: breaking down a large, complex problem into smaller, solvable problems
- Help-seeking: obtaining external assistance to deal with obstacles
- Hypothesis testing: assuming a possible explanation of the problem and trying to prove (or, in some contexts, disprove) the assumption
- Lateral thinking: approaching solutions indirectly and creatively
- Means-ends analysis: choosing an action at each step to move closer to the goal (see the sketch after this list)
- Morphological analysis: assessing the output and interactions of an entire system
- Observation / Question: in the natural sciences, an observation is an act or instance of noticing or perceiving and the acquisition of information from a primary source, and a question is an utterance which serves as a request for information
- Proof of impossibility: trying to prove that the problem cannot be solved; the point where the proof fails becomes the starting point for solving it
- Reduction: transforming the problem into another problem for which solutions exist
- Research: employing existing ideas or adapting existing solutions to similar problems
- Root cause analysis: identifying the cause of a problem
- Trial-and-error: testing possible solutions until the right one is found
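As promised in the means-ends analysis entry above, here is a minimal Python sketch of the idea on an invented toy problem (reach a target number from a start number using two operations). Classic means-ends analysis, as in Newell and Simon's General Problem Solver, uses richer difference tables; this greedy version can fail where a full search would succeed:

```python
# Means-ends analysis, toy version: at each step apply the operator
# that most reduces the difference between the current state and the goal.
operators = {
    "add 3": lambda x: x + 3,
    "double": lambda x: x * 2,
}

def means_ends(state, goal, max_steps=20):
    plan = []
    for _ in range(max_steps):
        if state == goal:
            return plan
        # Choose the operator whose result lands closest to the goal.
        name, op = min(operators.items(),
                       key=lambda kv: abs(kv[1](state) - goal))
        if abs(op(state) - goal) >= abs(state - goal):
            return None            # stuck: no operator reduces the gap
        plan.append(name)
        state = op(state)
    return None

print(means_ends(1, 11))  # ['add 3', 'double', 'add 3']
```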
Problem-solving methods
- A3 problem solving – Structured problem improvement approach
- Design thinking – Processes by which design concepts are developed
- Eight Disciplines Problem Solving – Eight disciplines of team-oriented problem solving method
- GROW model – Method for goal setting and problem solving
- Help-seeking – Theory in psychology
- How to Solve It – Book by George Pólya
- Lateral thinking – Manner of solving problems
- OODA loop – Observe–orient–decide–act cycle
- PDCA – Iterative design and management method
- Root cause analysis – Method of identifying the fundamental causes of faults or problems
- RPR problem diagnosis
- TRIZ – Problem-solving tools
- Scientific method – Empirical method for acquiring knowledge that has characterized the development of science
- Swarm intelligence – Collective behavior of decentralized, self-organized systems
- System dynamics – Study of non-linear complex systems
Common barriers
Common barriers to problem solving include mental constructs that impede an efficient search for solutions. Five of the most common identified by researchers are: confirmation bias, mental set, functional fixedness, unnecessary constraints, and irrelevant information.
Confirmation bias
Confirmation bias is an unintentional tendency to collect and use data which favors preconceived notions. Such notions may be incidental rather than motivated by important personal beliefs: the desire to be right may be sufficient motivation.[34]
Scientific and technical professionals also experience confirmation bias. One online experiment, for example, suggested that professionals within the field of psychological research are likely to view scientific studies that agree with their preconceived notions more favorably than studies that conflict with those notions.[35] According to Raymond Nickerson, one can see the consequences of confirmation bias in real-life situations, which range in severity from inefficient government policies to genocide. Nickerson argued that those who killed people accused of witchcraft demonstrated confirmation bias with motivation.[36] Researcher Michael Allen found evidence for confirmation bias with motivation in school children who worked to manipulate their science experiments to produce favorable results.[37]
However, confirmation bias does not necessarily require motivation. In 1960, Peter Cathcart Wason conducted an experiment in which participants first viewed three numbers and then created a hypothesis in the form of a rule that could have been used to create that triplet of numbers. When testing their hypotheses, participants tended to only create additional triplets of numbers that would confirm their hypotheses, and tended not to create triplets that would negate or disprove their hypotheses.[38]
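The 2-4-6 experiment is easy to restate computationally. In this Python sketch (the rules follow the usual description of Wason's task), every triplet generated to fit the narrow hypothesis is also accepted by the true rule, so such "positive" tests can never expose the error; only a triplet chosen to violate the hypothesis is informative:

```python
# True rule in Wason's 2-4-6 task: any strictly ascending triplet.
def true_rule(t):
    return t[0] < t[1] < t[2]

# A typical participant hypothesis: numbers ascending in steps of two.
def hypothesis(t):
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

# Confirmatory tests: instances of the hypothesis always get "yes".
for t in [(2, 4, 6), (10, 12, 14), (1, 3, 5)]:
    assert hypothesis(t) and true_rule(t)   # uninformative agreement

# A disconfirming test reveals that the hypothesis is too narrow:
t = (1, 2, 3)
print(hypothesis(t), true_rule(t))          # False True
```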
Mental set
Mental set is the inclination to re-use a previously successful solution, rather than search for new and better solutions. It is a reliance on habit.
It was first articulated by Abraham S. Luchins in the 1940s with his well-known water jug experiments.[39] Participants were asked to measure out a specific amount of water using a set of jugs with different maximum capacities. After Luchins gave a series of jug problems that could all be solved by a single technique, he introduced a problem that could be solved by the same technique, but also by a novel and simpler method. His participants tended to use the accustomed technique, oblivious to the simpler alternative.[40] A related effect was demonstrated in Norman Maier's 1931 experiment, which challenged participants to solve a problem by using a familiar tool (pliers) in an unconventional manner. Participants were often unable to view the object in a way that strayed from its typical use, a type of mental set known as functional fixedness (see the following section).
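Luchins's jug tasks are small state-space search problems, so the shortest solution can be found mechanically. Below is a breadth-first sketch in Python; the capacities (21, 127, 3) and target (100) follow the commonly cited first problem in Luchins's set, though the code itself is generic:

```python
from collections import deque

def successors(state, caps):
    """All states reachable by one fill, empty, or pour action."""
    n = len(state)
    for i in range(n):
        filled = list(state); filled[i] = caps[i]
        yield f"fill {i}", tuple(filled)
        emptied = list(state); emptied[i] = 0
        yield f"empty {i}", tuple(emptied)
        for j in range(n):
            if i != j:
                amount = min(state[i], caps[j] - state[j])
                poured = list(state)
                poured[i] -= amount
                poured[j] += amount
                yield f"pour {i}->{j}", tuple(poured)

def solve_jugs(caps, target):
    """Breadth-first search for the shortest action sequence
    that leaves exactly `target` units in some jug."""
    start = (0,) * len(caps)
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if target in state:
            return path
        for action, nxt in successors(state, caps):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

print(solve_jugs((21, 127, 3), 100))
# e.g. ['fill 1', 'pour 1->0', 'pour 1->2', 'empty 2', 'pour 1->2']
```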
Rigidly clinging to a mental set is called fixation, which can deepen to an obsession or preoccupation with attempted strategies that are repeatedly unsuccessful.[41] In the late 1990s, researcher Jennifer Wiley found that professional expertise in a field can create a mental set, perhaps leading to fixation.[41]
Groupthink, in which each individual takes on the mindset of the rest of the group, can produce and exacerbate mental set.[42] Social pressure leads to everybody thinking the same thing and reaching the same conclusions.
Functional fixedness
Functional fixedness is the tendency to view an object as having only one function, and to be unable to conceive of any novel use, as in the Maier pliers experiment described above. Functional fixedness is a specific form of mental set, and is one of the most common forms of cognitive bias in daily life.
As an example, imagine a man wants to kill a bug in his house, but the only thing at hand is a can of air freshener. He may start searching for something to kill the bug instead of squashing it with the can, thinking only of its main function of deodorizing.
Tim German and Clark Barrett describe this barrier: "subjects become 'fixed' on the design function of the objects, and problem solving suffers relative to control conditions in which the object's function is not demonstrated."[43] Their research found that young children's limited knowledge of an object's intended function reduces this barrier.[44] Research has also discovered functional fixedness in educational contexts, as an obstacle to understanding: "functional fixedness may be found in learning concepts as well as in solving chemistry problems."[45]
There are several hypotheses regarding how functional fixedness relates to problem solving.[46] It may waste time, delaying or entirely preventing the correct use of a tool.
Unnecessary constraints
Unnecessary constraints are arbitrary boundaries imposed unconsciously on the task at hand, which foreclose a productive avenue of solution. The solver may become fixated on only one type of solution, as if it were an inevitable requirement of the problem. Typically, this combines with mental set—clinging to a previously successful method.[47]
Visual problems can also produce mentally invented constraints.[48] A famous example is the dot problem: nine dots arranged in a three-by-three grid must be connected by drawing four straight line segments, without lifting pen from paper or backtracking along a line. Solvers typically assume the pen must stay within the outer square of dots, but the solution requires lines continuing beyond this frame, and researchers have found a 0% solution rate within a brief allotted time.[49]
This problem has produced the expression "think outside the box".[50] Such problems are typically solved via a sudden insight which leaps over the mental barriers, often after long toil against them.[51] This can be difficult depending on how the subject has structured the problem in their mind, how they draw on past experiences, and how well they juggle this information in their working memory. In the example, envisioning the dots connected outside the framing square requires visualizing an unconventional arrangement, which is a strain on working memory.[50]
Irrelevant information
Irrelevant information is a specification or data presented in a problem that is unrelated to the solution.[47] If the solver assumes that all information presented needs to be used, this often derails the problem-solving process, making relatively simple problems much harder.[52]
For example: "Fifteen percent of the people in Topeka have unlisted telephone numbers. You select 200 names at random from the Topeka phone book. How many of these people have unlisted phone numbers?"[50][page needed] The "obvious" answer is 15%, but in fact none of the unlisted people would be listed among the 200. This kind of "trick question" is often used in aptitude tests or cognitive evaluations.[53] Though not inherently difficult, they require independent thinking that is not necessarily common. Mathematical word problems often include irrelevant qualitative or numerical information as an extra challenge.
Avoiding barriers by changing problem representation
The disruption caused by the above cognitive biases can depend on how the information is represented:[53] visually, verbally, or mathematically. A classic example is the Buddhist monk problem:
A Buddhist monk begins at dawn one day walking up a mountain, reaches the top at sunset, meditates at the top for several days until one dawn when he begins to walk back to the foot of the mountain, which he reaches at sunset. Making no assumptions about his starting or stopping or about his pace during the trips, prove that there is a place on the path which he occupies at the same hour of the day on the two separate journeys.
The problem is difficult to solve verbally, by trying to describe the monk's progress on each day. It becomes much easier when the situation is represented mathematically by a function: one visualizes a graph whose horizontal axis is time of day and whose vertical axis shows the monk's position (or altitude) on the path at each time. Superimposing the two journey curves, which traverse opposite diagonals of a rectangle, one sees they must cross each other somewhere. The visual representation by graphing has resolved the difficulty.
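The graphical argument can also be written as a worked equation (a sketch; the function names are ours, not from the source). Let u(t) be the monk's distance from the foot of the mountain at time of day t on the ascending day, d(t) the same quantity on the descending day, and L the length of the path:

```latex
f(t) = u(t) - d(t), \qquad
f(\mathrm{dawn}) = 0 - L < 0, \qquad
f(\mathrm{sunset}) = L - 0 > 0
```

Both journeys are continuous in time, so f is continuous, and by the intermediate value theorem there is some time t* with f(t*) = 0: a time of day at which the monk occupies the same point on the path on both journeys.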
Similar strategies can often improve problem solving on tests.[47][54]
Other barriers for individuals
People who are engaged in problem solving tend to overlook subtractive changes, even those that are critical elements of efficient solutions. For example, a city planner may decide that the solution to traffic congestion is to add another lane to a highway, rather than finding ways to reduce the need for the highway in the first place. This tendency to solve problems by first, only, or mostly creating or adding elements, rather than by subtracting elements or processes, has been shown to intensify with higher cognitive loads such as information overload.[55]
Dreaming: problem solving without waking consciousness
People can also solve problems while they are asleep. There are many reports of scientists and engineers who solved problems in their dreams. For example, Elias Howe, inventor of the sewing machine, figured out the structure of the bobbin from a dream.[56]
The chemist August Kekulé was considering how benzene arranged its six carbon and six hydrogen atoms. Thinking about the problem, he dozed off, and dreamt of dancing atoms that fell into a snakelike pattern, which led him to discover the benzene ring. As Kekulé wrote in his diary,
One of the snakes seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke; and this time also I spent the rest of the night in working out the consequences of the hypothesis.[57]
There are also empirical studies of how people can think consciously about a problem before going to sleep, and then solve the problem with a dream image. Dream researcher William C. Dement told his undergraduate class of 500 students that he wanted them to think about an infinite sequence whose first elements were OTTFF, to see if they could deduce the principle behind it and say what the next elements would be.[58] He asked them to think about this problem every night for 15 minutes before going to sleep and to write down any dreams that they then had. They were instructed to think about the problem again for 15 minutes when they awakened in the morning.
The sequence OTTFF consists of the first letters of the number words: one, two, three, four, five. The next five elements are SSENT (six, seven, eight, nine, ten). Some of the students solved the puzzle by reflecting on their dreams. One example was a student who reported the following dream:[58]
I was standing in an art gallery, looking at the paintings on the wall. As I walked down the hall, I began to count the paintings: one, two, three, four, five. As I came to the sixth and seventh, the paintings had been ripped from their frames. I stared at the empty frames with a peculiar feeling that some mystery was about to be solved. Suddenly I realized that the sixth and seventh spaces were the solution to the problem!
With more than 500 undergraduate students, 87 dreams were judged to be related to the problems students were assigned (53 directly related and 34 indirectly related). Yet of the people who had dreams that apparently solved the problem, only seven were actually able to consciously know the solution. The rest (46 out of 53) thought they did not know the solution.
Albert Einstein believed that much problem solving goes on unconsciously, and the person must then figure out and formulate consciously what the mind has already solved. He believed this was his process in formulating the theory of relativity: "The creator of the problem possesses the solution."[59] Einstein said that he did his problem solving without words, mostly in images. "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined."[60]
Cognitive sciences: two schools
Problem-solving processes differ across knowledge domains and across levels of expertise.[61] For this reason, cognitive-science findings obtained in the laboratory cannot necessarily generalize to problem-solving situations outside the laboratory. Since the 1990s, this has led to a research emphasis on real-world problem solving. This emphasis has been expressed quite differently in North America and Europe, however. Whereas North American research has typically concentrated on studying problem solving in separate, natural knowledge domains, much of the European research has focused on novel, complex problems, and has been performed with computerized scenarios.[62]
Europe
In Europe, two main approaches have surfaced, one initiated by Donald Broadbent[63] in the United Kingdom and the other one by Dietrich Dörner[64] in Germany. The two approaches share an emphasis on relatively complex, semantically rich, computerized laboratory tasks, constructed to resemble real-life problems. The approaches differ somewhat in their theoretical goals and methodology. The tradition initiated by Broadbent emphasizes the distinction between cognitive problem-solving processes that operate under awareness versus outside of awareness, and typically employs mathematically well-defined computerized systems. The tradition initiated by Dörner, on the other hand, has an interest in the interplay of the cognitive, motivational, and social components of problem solving, and utilizes very complex computerized scenarios that contain up to 2,000 highly interconnected variables.[65]
North America
In North America, initiated by the work of Herbert A. Simon on "learning by doing" in semantically rich domains,[66] researchers began to investigate problem solving separately in different natural knowledge domains—such as physics, writing, or chess playing—rather than attempting to extract a global theory of problem solving.[67] These researchers have focused on the development of problem solving within certain domains, that is, on the development of expertise.[68]
Areas that have attracted rather intensive attention in North America include:
- calculation[69]
- computer skills[70]
- game playing[71]
- lawyers' reasoning[72]
- managerial problem solving[73]
- physical problem solving
- mathematical problem solving[74]
- mechanical problem solving[75]
- personal problem solving[76]
- political decision making[77]
- problem solving in electronics[78]
- problem solving for innovations and inventions: TRIZ[79]
- reading[80]
- social problem solving[11]
- writing[81]
Characteristics of complex problems
Complex problem solving (CPS) is distinguishable from simple problem solving (SPS). In SPS there is a singular and simple obstacle; in CPS there may be multiple simultaneous obstacles. For example, a surgeon at work faces far more complex problems than an individual deciding what shoes to wear. As elucidated by Dietrich Dörner, and later expanded upon by Joachim Funke, complex problems have some typical characteristics, which include:[1]
- complexity (large numbers of items, interrelations, and decisions)
- enumerability
- heterogeneity
- connectivity (hierarchy relation, communication relation, allocation relation)
- dynamics (time considerations)
- temporal constraints
- temporal sensitivity
- phase effects
- dynamic unpredictability
- intransparency (lack of clarity of the situation)
- commencement opacity
- continuation opacity
- polytely (multiple goals)[82]
Collective problem solving
People solve problems on many different levels—from the individual to the civilizational. Collective problem solving refers to problem solving performed collectively. Social issues and global issues can typically only be solved collectively.
The complexity of contemporary problems exceeds the cognitive capacity of any individual and requires different but complementary varieties of expertise and collective problem solving ability.[83]
Collective intelligence is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals.
In collaborative problem solving people work together to solve real-world problems. Members of problem-solving groups share a common concern, a similar passion, and/or a commitment to their work. Members can ask questions, wonder, and try to understand common issues. They share expertise, experiences, tools, and methods.[84] Groups may be fluid based on need, may only occur temporarily to finish an assigned task, or may be more permanent depending on the nature of the problems.
For example, in the educational context, members of a group may all have input into the decision-making process and a role in the learning process. Members may be responsible for the thinking, teaching, and monitoring of all members in the group. Group work may be coordinated among members so that each member makes an equal contribution to the whole work. Members can identify and build on their individual strengths so that everyone can make a significant contribution to the task.[85] Collaborative group work can promote critical thinking skills, problem-solving skills, social skills, and self-esteem. By using collaboration and communication, members often learn from one another and construct meaningful knowledge, which often leads to better learning outcomes than individual work.[86]
Collaborative groups require joint intellectual efforts between the members and involve social interactions to solve problems together. The knowledge shared during these interactions is acquired during communication, negotiation, and production of materials.[87] Members actively seek information from others by asking questions. The capacity to use questions to acquire new information increases understanding and the ability to solve problems.[88]
In a 1962 research report, Douglas Engelbart linked collective intelligence to organizational effectiveness, and predicted that proactively "augmenting human intellect" would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".[89]
Henry Jenkins, a theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture.[90] He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contributes to the development of such skills.[91]
Collective impact is the commitment of a group of actors from different sectors to a common agenda for solving a specific social problem, using a structured form of collaboration.
After World War II, the UN and the Bretton Woods institutions were created, later joined by the WTO in 1995. Collective problem solving at the international level crystallized around these three types of organization from the 1980s onward. As these global institutions remain state-like or state-centric, it is unsurprising that they perpetuate state-like or state-centric approaches to collective problem solving rather than alternative ones.[92]
Crowdsourcing is a process of accumulating ideas, thoughts, or information from many independent participants, with the aim of finding the best solution for a given challenge. Modern information technologies allow many people to be involved and make it practical to manage their suggestions in ways that produce good results.[93] The Internet enables a new capacity for collective (including planetary-scale) problem solving.[94]
See also
- Actuarial science – Statistics applied to risk in insurance and other financial products
- Analytical skill – Crucial skill in all different fields of work and life
- Creative problem-solving – Mental process of problem solving
- Collective intelligence – Group intelligence that emerges from collective efforts
- Community of practice
- Coworking – Practice of independent contractors or scientists sharing office space without supervision
- Crowdsolving – Sourcing services or funds from a group
- Divergent thinking – Process of generating creative ideas
- Grey problem
- Innovation – Practical implementation of improvements
- Instrumentalism – Position in the philosophy of science
- Problem-posing education – Method of teaching coined by Paulo Freire
- Problem statement – Description of an issue
- Problem structuring methods
- Shared intentionality – Ability to engage with others' psychological states
- Structural fix
- Subgoal labeling – Cognitive process
- Troubleshooting – Form of problem solving, often applied to repair failed products or processes
- Wicked problem – Problem that is difficult or impossible to solve
Notes
- ^ a b Frensch, Peter A.; Funke, Joachim, eds. (2014-04-04). Complex Problem Solving. Psychology Press. doi:10.4324/9781315806723. ISBN 978-1-315-80672-3.
- ^ a b Schacter, D.L.; Gilbert, D.T.; Wegner, D.M. (2011). Psychology (2nd ed.). New York: Worth Publishers. p. 376.
- ^ Blanchard-Fields, F. (2007). "Everyday problem solving and emotion: An adult developmental perspective". Current Directions in Psychological Science. 16 (1): 26–31. doi:10.1111/j.1467-8721.2007.00469.x. S2CID 145645352.
- ^ Zimmermann, Bernd (2004). On mathematical problem-solving processes and history of mathematics. ICME 10. Copenhagen.
- ^ Granvold, Donald K. (1997). "Cognitive-Behavioral Therapy with Adults". In Brandell, Jerrold R. (ed.). Theory and Practice in Clinical Social Work. Simon and Schuster. pp. 189. ISBN 978-0-684-82765-0.
- ^ Robertson, S. Ian (2001). "Introduction to the study of problem solving". Problem Solving. Psychology Press. ISBN 0-415-20300-7.
- ^ Rubin, M.; Watt, S. E.; Ramelli, M. (2012). "Immigrants' social integration as a function of approach-avoidance orientation and problem-solving style". International Journal of Intercultural Relations. 36 (4): 498–505. doi:10.1016/j.ijintrel.2011.12.009. hdl:1959.13/931119.
- ^ Goldstein F. C.; Levin H. S. (1987). "Disorders of reasoning and problem-solving ability". In M. Meier; A. Benton; L. Diller (eds.). Neuropsychological rehabilitation. London: Taylor & Francis Group.
- ^
- Vallacher, Robin; M. Wegner, Daniel (2012). "Action Identification Theory". Handbook of Theories of Social Psychology. pp. 327–348. doi:10.4135/9781446249215.n17. ISBN 978-0-85702-960-7.
- Margrett, J. A; Marsiske, M (2002). "Gender differences in older adults' everyday cognitive collaboration". International Journal of Behavioral Development. 26 (1): 45–59. doi:10.1080/01650250143000319. PMC 2909137. PMID 20657668.
- Antonucci, T. C; Ajrouch, K. J; Birditt, K. S (2013). "The Convoy Model: Explaining Social Relations From a Multidisciplinary Perspective". The Gerontologist. 54 (1): 82–92. doi:10.1093/geront/gnt118. PMC 3894851. PMID 24142914.
- ^ Rath, Joseph F.; Simon, Dvorah; Langenbahn, Donna M.; Sherr, Rose Lynn; Diller, Leonard (2003). "Group treatment of problem-solving deficits in outpatients with traumatic brain injury: A randomised outcome study". Neuropsychological Rehabilitation. 13 (4): 461–488. doi:10.1080/09602010343000039. S2CID 143165070.
- ^ a b
- D'Zurilla, T. J.; Goldfried, M. R. (1971). "Problem solving and behavior modification". Journal of Abnormal Psychology. 78 (1): 107–126. doi:10.1037/h0031360. PMID 4938262.
- D'Zurilla, T. J.; Nezu, A. M. (1982). "Social problem solving in adults". In P. C. Kendall (ed.). Advances in cognitive-behavioral research and therapy. Vol. 1. New York: Academic Press. pp. 201–274.
- ^ Rath, J. F.; Langenbahn, D. M.; Simon, D; Sherr, R. L.; Fletcher, J.; Diller, L. (2004). "The construct of problem solving in higher level neuropsychological assessment and rehabilitation*1". Archives of Clinical Neuropsychology. 19 (5): 613–635. doi:10.1016/j.acn.2003.08.006. PMID 15271407.
- ^ Rath, Joseph F.; Hradil, Amy L.; Litke, David R.; Diller, Leonard (2011). "Clinical applications of problem-solving research in neuropsychological rehabilitation: Addressing the subjective experience of cognitive deficits in outpatients with acquired brain injury". Rehabilitation Psychology. 56 (4): 320–328. doi:10.1037/a0025817. ISSN 1939-1544. PMC 9728040. PMID 22121939.
- ^ Hoppmann, Christiane A.; Blanchard-Fields, Fredda (2010). "Goals and everyday problem solving: Manipulating goal preferences in young and older adults". Developmental Psychology. 46 (6): 1433–1443. doi:10.1037/a0020676. PMID 20873926.
- ^ Duncker, Karl (1935). Zur Psychologie des produktiven Denkens [The psychology of productive thinking] (in German). Berlin: Julius Springer.
- ^ Newell, Allen; Simon, Herbert A. (1972). Human problem solving. Englewood Cliffs, N.J.: Prentice-Hall.
- ^ For example:
- X-ray problem, by Duncker, Karl (1935). Zur Psychologie des produktiven Denkens [The psychology of productive thinking] (in German). Berlin: Julius Springer.
- Disk problem, later known as Tower of Hanoi, by Ewert, P. H.; Lambert, J. F. (1932). "Part II: The Effect of Verbal Instructions upon the Formation of a Concept". The Journal of General Psychology. 6 (2). Informa UK Limited: 400–413. doi:10.1080/00221309.1932.9711880. ISSN 0022-1309. Archived from the original on 2020-08-06. Retrieved 2019-06-09.
- ^ Mayer, R. E. (1992). Thinking, problem solving, cognition (Second ed.). New York: W. H. Freeman and Company.
- ^ Armstrong, J. Scott; Denniston, William B. Jr.; Gordon, Matt M. (1975). "The Use of the Decomposition Principle in Making Judgments" (PDF). Organizational Behavior and Human Performance. 14 (2): 257–263. doi:10.1016/0030-5073(75)90028-8. S2CID 122659209. Archived from the original (PDF) on 2010-06-20.
- ^ Malakooti, Behnam (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons. ISBN 978-1-118-58537-5.
- ^ Kowalski, Robert (1974). "Predicate Logic as a Programming Language" (PDF). Information Processing. 74. Archived (PDF) from the original on 2024-01-19. Retrieved 2023-09-20.
- ^ Kowalski, Robert (1979). Logic for Problem Solving (PDF). Artificial Intelligence Series. Vol. 7. Elsevier Science Publishing. ISBN 0-444-00368-1. Archived (PDF) from the original on 2023-11-02. Retrieved 2023-09-20.
- ^ Kowalski, Robert (2011). Computational Logic and Human Thinking: How to be Artificially Intelligent (PDF). Cambridge University Press. Archived (PDF) from the original on 2024-06-01. Retrieved 2023-09-20.
- ^ Staat, Wim (1993). "On abduction, deduction, induction and the categories". Transactions of the Charles S. Peirce Society. 29 (2): 225–237.
- ^ Sullivan, Patrick F. (1991). "On Falsificationist Interpretations of Peirce". Transactions of the Charles S. Peirce Society. 27 (2): 197–219.
- ^ Ho, Yu Chong (1994). Abduction? Deduction? Induction? Is There a Logic of Exploratory Data Analysis? (PDF). Annual Meeting of the American Educational Research Association. New Orleans, La. Archived (PDF) from the original on 2023-11-02. Retrieved 2023-09-20.
- ^ Passuello, Luciano (2008-11-04). "Einstein's Secret to Amazing Problem Solving (and 10 Specific Ways You Can Use It)". Litemind. Archived from the original on 2017-06-21. Retrieved 2017-06-11.
- ^ Tschisgale, Paul; Kubsch, Marcus; Wulff, Peter; Petersen, Stefan; Neumann, Knut (2025-01-31). "Exploring the sequential structure of students' physics problem-solving approaches using process mining and sequence analysis". Physical Review Physics Education Research. 21 (1). doi:10.1103/PhysRevPhysEducRes.21.010111. ISSN 2469-9896.
- ^ a b c "Commander's Handbook for Strategic Communication and Communication Strategy" (PDF). United States Joint Forces Command, Joint Warfighting Center, Suffolk, Va. 27 October 2009. Archived from the original (PDF) on April 29, 2011. Retrieved 10 October 2016.
- ^ a b Robertson, S. Ian (2017). Problem solving: perspectives from cognition and neuroscience (2nd ed.). London: Taylor & Francis. ISBN 978-1-317-49601-4. OCLC 962750529.
- ^ Bransford, J. D.; Stein, B. S (1993). The ideal problem solver: A guide for improving thinking, learning, and creativity (2nd ed.). New York: W.H. Freeman.
- ^
- Ash, Ivan K.; Jee, Benjamin D.; Wiley, Jennifer (2012). "Investigating Insight as Sudden Learning". The Journal of Problem Solving. 4 (2). doi:10.7771/1932-6246.1123. ISSN 1932-6246.
- Chronicle, Edward P.; MacGregor, James N.; Ormerod, Thomas C. (2004). "What Makes an Insight Problem? The Roles of Heuristics, Goal Conception, and Solution Recoding in Knowledge-Lean Problems" (PDF). Journal of Experimental Psychology: Learning, Memory, and Cognition. 30 (1): 14–27. doi:10.1037/0278-7393.30.1.14. ISSN 1939-1285. PMID 14736293. S2CID 15631498.
- Chu, Yun; MacGregor, James N. (2011). "Human Performance on Insight Problem Solving: A Review". The Journal of Problem Solving. 3 (2). doi:10.7771/1932-6246.1094. ISSN 1932-6246.
- ^ Wang, Y.; Chiew, V. (2010). "On the cognitive process of human problem solving" (PDF). Cognitive Systems Research. 11 (1). Elsevier BV: 81–92. doi:10.1016/j.cogsys.2008.08.003. ISSN 1389-0417. S2CID 16238486.
- ^ Nickerson, Raymond S. (1998). "Confirmation bias: A ubiquitous phenomenon in many guises". Review of General Psychology. 2 (2): 176. doi:10.1037/1089-2680.2.2.175. S2CID 8508954.
- ^ Hergovich, Andreas; Schott, Reinhard; Burger, Christoph (2010). "Biased Evaluation of Abstracts Depending on Topic and Conclusion: Further Evidence of a Confirmation Bias Within Scientific Psychology". Current Psychology. 29 (3). Springer Science and Business Media LLC: 188–209. doi:10.1007/s12144-010-9087-5. ISSN 1046-1310. S2CID 145497196.
- ^ Nickerson, Raymond (1998). "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises". Review of General Psychology. 2 (2). American Psychological Association: 175–220. doi:10.1037/1089-2680.2.2.175.
- ^ Allen, Michael (2011). "Theory-led confirmation bias and experimental persona". Research in Science & Technological Education. 29 (1). Informa UK Limited: 107–127. Bibcode:2011RSTEd..29..107A. doi:10.1080/02635143.2010.539973. ISSN 0263-5143. S2CID 145706148.
- ^ Wason, P. C. (1960). "On the failure to eliminate hypotheses in a conceptual task". Quarterly Journal of Experimental Psychology. 12 (3): 129–140. doi:10.1080/17470216008416717. S2CID 19237642.
- ^ Luchins, Abraham S. (1942). "Mechanization in problem solving: The effect of Einstellung". Psychological Monographs. 54 (248): i-95. doi:10.1037/h0093502.
- ^ Öllinger, Michael; Jones, Gary; Knoblich, Günther (2008). "Investigating the Effect of Mental Set on Insight Problem Solving" (PDF). Experimental Psychology. 55 (4). Hogrefe Publishing Group: 269–282. doi:10.1027/1618-3169.55.4.269. ISSN 1618-3169. PMID 18683624. Archived (PDF) from the original on 2023-03-16. Retrieved 2023-01-31.
- ^ a b Wiley, Jennifer (1998). "Expertise as mental set: The effects of domain knowledge in creative problem solving". Memory & Cognition. 24 (4): 716–730. doi:10.3758/bf03211392. PMID 9701964.
- ^ Cottam, Martha L.; Dietz-Uhler, Beth; Mastors, Elena; Preston, Thomas (2010). Introduction to Political Psychology (2nd ed.). New York: Psychology Press.
- ^ German, Tim P.; Barrett, H. Clark (2005). "Functional Fixedness in a Technologically Sparse Culture". Psychological Science. 16 (1). SAGE Publications: 1–5. doi:10.1111/j.0956-7976.2005.00771.x. ISSN 0956-7976. PMID 15660843. S2CID 1833823.
- ^ German, Tim P.; Defeyter, Margaret A. (2000). "Immunity to functional fixedness in young children". Psychonomic Bulletin and Review. 7 (4): 707–712. doi:10.3758/BF03213010. PMID 11206213.
- ^ Furio, C.; Calatayud, M. L.; Baracenas, S.; Padilla, O. (2000). "Functional fixedness and functional reduction as common sense reasonings in chemical equilibrium and in geometry and polarity of molecules". Science Education. 84 (5): 545–565. Bibcode:2000SciEd..84..545F. doi:10.1002/1098-237X(200009)84:5<545::AID-SCE1>3.0.CO;2-1.
- ^ Adamson, Robert E (1952). "Functional fixedness as related to problem solving: A repetition of three experiments". Journal of Experimental Psychology. 44 (4): 288–291. doi:10.1037/h0062487. PMID 13000071.
- ^ a b c Kellogg, R. T. (2003). Cognitive psychology (2nd ed.). California: Sage Publications, Inc.
- ^ Meloy, J. R. (1998). The Psychology of Stalking, Clinical and Forensic Perspectives (2nd ed.). London, England: Academic Press.
- ^ MacGregor, J.N.; Ormerod, T.C.; Chronicle, E.P. (2001). "Information-processing and insight: A process model of performance on the nine-dot and related problems". Journal of Experimental Psychology: Learning, Memory, and Cognition. 27 (1): 176–201. doi:10.1037/0278-7393.27.1.176. PMID 11204097.
- ^ a b c Weiten, Wayne (2011). Psychology: themes and variations (8th ed.). California: Wadsworth.
- ^ Novick, L. R.; Bassok, M. (2005). "Problem solving". In Holyoak, K. J.; Morrison, R. G. (eds.). Cambridge handbook of thinking and reasoning. New York, N.Y.: Cambridge University Press. pp. 321–349.
- ^ Walinga, Jennifer (2010). "From walls to windows: Using barriers as pathways to insightful solutions". The Journal of Creative Behavior. 44 (3): 143–167. doi:10.1002/j.2162-6057.2010.tb01331.x.
- ^ a b Walinga, Jennifer; Cunningham, J. Barton; MacGregor, James N. (2011). "Training insight problem solving through focus on barriers and assumptions". The Journal of Creative Behavior. 45: 47–58. doi:10.1002/j.2162-6057.2011.tb01084.x.
- ^ Vlamings, Petra H. J. M.; Hare, Brian; Call, Joseph (2009). "Reaching around barriers: The performance of great apes and 3–5-year-old children". Animal Cognition. 13 (2): 273–285. doi:10.1007/s10071-009-0265-5. PMC 2822225. PMID 19653018.
- ^
- Gupta, Sujata (7 April 2021). "People add by default even when subtraction makes more sense". Science News. Archived from the original on 21 May 2021. Retrieved 10 May 2021.
- Adams, Gabrielle S.; Converse, Benjamin A.; Hales, Andrew H.; Klotz, Leidy E. (April 2021). "People systematically overlook subtractive changes". Nature. 592 (7853): 258–261. Bibcode:2021Natur.592..258A. doi:10.1038/s41586-021-03380-y. ISSN 1476-4687. PMID 33828317. S2CID 233185662. Archived from the original on 10 May 2021. Retrieved 10 May 2021.
- ^ Kaempffert, Waldemar B. (1924). A Popular History of American Invention. Vol. 2. New York: Charles Scribner's Sons. p. 385.
- ^
- Kekulé, August (1890). "Benzolfest-Rede". Berichte der Deutschen Chemischen Gesellschaft. 23: 1302–1311.
- Benfey, O. (1958). "Kekulé and the birth of the structural theory of organic chemistry in 1858". Journal of Chemical Education. 35 (1): 21–23. Bibcode:1958JChEd..35...21B. doi:10.1021/ed035p21.
- ^ a b Dement, W.C. (1972). Some Must Watch While Some Just Sleep. New York: Freeman.
- ^ Fromm, Erika O. (1998). "Lost and found half a century later: Letters by Freud and Einstein". American Psychologist. 53 (11): 1195–1198. doi:10.1037/0003-066x.53.11.1195.
- ^ Einstein, Albert (1954). "A Mathematician's Mind". Ideas and Opinions. New York: Bonanza Books. p. 25.
- ^ Sternberg, R. J. (1995). "Conceptions of expertise in complex problem solving: A comparison of alternative conceptions". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 295–321.
- ^ Funke, J. (1991). "Solving complex problems: Human identification and control of complex systems". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 185–222. ISBN 0-8058-0650-4. OCLC 23254443.
- ^
- Broadbent, Donald E. (1977). "Levels, hierarchies, and the locus of control". Quarterly Journal of Experimental Psychology. 29 (2): 181–201. doi:10.1080/14640747708400596. S2CID 144328372. Archived from the original on 2020-08-06. Retrieved 2019-06-09.
- Berry, Dianne C.; Broadbent, Donald E. (1995). "Implicit learning in the control of complex systems: A reconsideration of some of the earlier claims". In Frensch, P.A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 131–150.
- ^
- Dörner, Dietrich (1975). "Wie Menschen eine Welt verbessern wollten" [How people wanted to improve the world]. Bild der Wissenschaft (in German). 12: 48–53.
- Dörner, Dietrich (1985). "Verhalten, Denken und Emotionen" [Behavior, thinking, and emotions]. In Eckensberger, L. H.; Lantermann, E. D. (eds.). Emotion und Reflexivität (in German). München, Germany: Urban & Schwarzenberg. pp. 157–181.
- Dörner, Dietrich; Wearing, Alex J. (1995). "Complex problem solving: Toward a (computer-simulated) theory". In Frensch, P.A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 65–99.
- ^
- Buchner, A. (1995). "Theories of complex problem solving". In Frensch, P.A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 27–63.
- Dörner, D.; Kreuzig, H. W.; Reither, F.; Stäudel, T., eds. (1983). Lohhausen. Vom Umgang mit Unbestimmtheit und Komplexität [Lohhausen. On dealing with uncertainty and complexity] (in German). Bern, Switzerland: Hans Huber.
- Ringelband, O. J.; Misiak, C.; Kluwe, R. H. (1990). "Mental models and strategies in the control of a complex system". In Ackermann, D.; Tauber, M. J. (eds.). Mental models and human-computer interaction. Vol. 1. Amsterdam: Elsevier Science Publishers. pp. 151–164.
- ^
- Anzai, K.; Simon, H. A. (1979). "The theory of learning by doing". Psychological Review. 86 (2): 124–140. doi:10.1037/0033-295X.86.2.124. PMID 493441.
- Bhaskar, R.; Simon, Herbert A. (1977). "Problem Solving in Semantically Rich Domains: An Example from Engineering Thermodynamics". Cognitive Science. 1 (2). Wiley: 193–215. doi:10.1207/s15516709cog0102_3. ISSN 0364-0213.
- ^ e.g., Sternberg, R. J.; Frensch, P. A., eds. (1991). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. ISBN 0-8058-0650-4. OCLC 23254443.
- ^
- Chase, W. G.; Simon, H. A. (1973). "Perception in chess". Cognitive Psychology. 4: 55–81. doi:10.1016/0010-0285(73)90004-2.
- Chi, M. T. H.; Feltovich, P. J.; Glaser, R. (1981). "Categorization and representation of physics problems by experts and novices". Cognitive Science. 5 (2): 121–152. doi:10.1207/s15516709cog0502_2.
- Anderson, J. R.; Boyle, C. B.; Reiser, B. J. (1985). "Intelligent tutoring systems". Science. 228 (4698): 456–462. Bibcode:1985Sci...228..456A. doi:10.1126/science.228.4698.456. PMID 17746875. S2CID 62403455.
- ^ Sokol, S. M.; McCloskey, M. (1991). "Cognitive mechanisms in calculation". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 85–116. ISBN 0-8058-0650-4. OCLC 23254443.
- ^ Kay, D. S. (1991). "Computer interaction: Debugging the problems". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 317–340. ISBN 0-8058-0650-4. OCLC 23254443. Archived from the original on 2022-12-04. Retrieved 2022-12-04.
- ^ Frensch, P. A.; Sternberg, R. J. (1991). "Skill-related differences in game playing". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 343–381. ISBN 0-8058-0650-4. OCLC 23254443.
- ^ Amsel, E.; Langer, R.; Loutzenhiser, L. (1991). "Do lawyers reason differently from psychologists? A comparative design for studying expertise". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 223–250. ISBN 0-8058-0650-4. OCLC 23254443.
- ^ Wagner, R. K. (1991). "Managerial problem solving". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 159–183. PsycNET: 1991-98396-005.
- ^
- Pólya, George (1945). How to Solve It. Princeton University Press.
- Schoenfeld, A. H. (1985). Mathematical Problem Solving. Orlando, Fla.: Academic Press. ISBN 978-1-4832-9548-0. Archived from the original on 2023-10-23. Retrieved 2019-06-09.
- ^ Hegarty, M. (1991). "Knowledge and processes in mechanical problem solving". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 253–285. ISBN 0-8058-0650-4. OCLC 23254443. Archived from the original on 2022-12-04. Retrieved 2022-12-04.
- ^ Heppner, P. P.; Krauskopf, C. J. (1987). "An information-processing approach to personal problem solving". The Counseling Psychologist. 15 (3): 371–447. doi:10.1177/0011000087153001. S2CID 146180007.
- ^ Voss, J. F.; Wolfe, C. R.; Lawrence, J. A.; Engle, R. A. (1991). "From representation to decision: An analysis of problem solving in international relations". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 119–158. ISBN 0-8058-0650-4. OCLC 23254443. PsycNET: 1991-98396-004.
- ^ Lesgold, A.; Lajoie, S. (1991). "Complex problem solving in electronics". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 287–316. ISBN 0-8058-0650-4. OCLC 23254443. Archived from the original on 2022-12-04. Retrieved 2022-12-04.
- ^ Altshuller, Genrich (1994). And Suddenly the Inventor Appeared. Translated by Lev Shulyak. Worcester, Mass.: Technical Innovation Center. ISBN 978-0-9640740-1-9.
- ^ Stanovich, K. E.; Cunningham, A. E. (1991). "Reading as constrained reasoning". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 3–60. ISBN 0-8058-0650-4. OCLC 23254443. Archived from the original on 2023-09-03. Retrieved 2022-12-04.
- ^ Bryson, M.; Bereiter, C.; Scardamalia, M.; Joram, E. (1991). "Going beyond the problem as given: Problem solving in expert and novice writers". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 61–84. ISBN 0-8058-0650-4. OCLC 23254443.
- ^ Sternberg, R. J.; Frensch, P. A., eds. (1991). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. ISBN 0-8058-0650-4. OCLC 23254443.
- ^ Hung, Woei (2013). "Team-based complex problem solving: a collective cognition perspective". Educational Technology Research and Development. 61 (3): 365–384. doi:10.1007/s11423-013-9296-3. S2CID 62663840.
- ^ Jewett, Pamela; MacPhee, Deborah (2012). "Adding Collaborative Peer Coaching to Our Teaching Identities". The Reading Teacher. 66 (2): 105–110. doi:10.1002/TRTR.01089.
- ^ Wang, Qiyun (2009). "Design and Evaluation of a Collaborative Learning Environment". Computers and Education. 53 (4): 1138–1146. doi:10.1016/j.compedu.2009.05.023.
- ^ Wang, Qiyan (2010). "Using online shared workspaces to support group collaborative learning". Computers and Education. 55 (3): 1270–1276. doi:10.1016/j.compedu.2010.05.023.
- ^ Kai-Wai Chu, Samuel; Kennedy, David M. (2011). "Using Online Collaborative tools for groups to Co-Construct Knowledge". Online Information Review. 35 (4): 581–597. doi:10.1108/14684521111161945. ISSN 1468-4527. S2CID 206388086.
- ^ Legare, Cristine; Mills, Candice; Souza, Andre; Plummer, Leigh; Yasskin, Rebecca (2013). "The use of questions as problem-solving strategies during early childhood". Journal of Experimental Child Psychology. 114 (1): 63–7. doi:10.1016/j.jecp.2012.07.002. PMID 23044374.
- ^ Engelbart, Douglas (1962). "Team Cooperation". Augmenting Human Intellect: A Conceptual Framework. Vol. AFOSR-3223. Stanford Research Institute.
- ^ Flew, Terry (2008). New Media: an introduction. Melbourne: Oxford University Press.
- ^ Jenkins, Henry. "Interactive audiences? The 'collective intelligence' of media fans" (PDF). Archived from the original (PDF) on April 26, 2018. Retrieved December 11, 2016.
- ^ Finger, Matthias (2008-03-27). "Which governance for sustainable development? An organizational and institutional perspective". In Park, Jacob; Conca, Ken; Finger, Matthias (eds.). The Crisis of Global Environmental Governance: Towards a New Political Economy of Sustainability. Routledge. p. 48. ISBN 978-1-134-05982-9.
- ^
- Guazzini, Andrea; Vilone, Daniele; Donati, Camillo; Nardi, Annalisa; Levnajić, Zoran (10 November 2015). "Modeling crowdsourcing as collective problem solving". Scientific Reports. 5 16557. arXiv:1506.09155. Bibcode:2015NatSR...516557G. doi:10.1038/srep16557. PMC 4639727. PMID 26552943.
- Boroomand, A.; Smaldino, P.E. (2021). "Hard Work, Risk-Taking, and Diversity in a Model of Collective Problem Solving". Journal of Artificial Societies and Social Simulation. 24 (4) 10. doi:10.18564/jasss.4704. S2CID 240483312.
- ^ Stefanovitch, Nicolas; Alshamsi, Aamena; Cebrian, Manuel; Rahwan, Iyad (30 September 2014). "Error and attack tolerance of collective problem solving: The DARPA Shredder Challenge". EPJ Data Science. 3 (1) 13. doi:10.1140/epjds/s13688-014-0013-1. hdl:21.11116/0000-0002-D39F-D.
Further reading
[edit]
- Beckmann, Jens F.; Guthke, Jürgen (1995). "Complex problem solving, intelligence, and learning ability". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 177–200.
- Brehmer, Berndt (1995). "Feedback delays in dynamic decision making". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 103–130.
- Brehmer, Berndt; Dörner, D. (1993). "Experiments with computer-simulated microworlds: Escaping both the narrow straits of the laboratory and the deep blue sea of the field study". Computers in Human Behavior. 9 (2–3): 171–184. doi:10.1016/0747-5632(93)90005-D.
- Dörner, D. (1992). "Über die Philosophie der Verwendung von Mikrowelten oder 'Computerszenarios' in der psychologischen Forschung" [On the proper use of microworlds or "computer scenarios" in psychological research]. In Gundlach, H. (ed.). Psychologische Forschung und Methode: Das Versprechen des Experiments. Festschrift für Werner Traxel (in German). Passau, Germany: Passavia-Universitäts-Verlag. pp. 53–87.
- Eyferth, K.; Schömann, M.; Widowski, D. (1986). "Der Umgang von Psychologen mit Komplexität" [On how psychologists deal with complexity]. Sprache & Kognition (in German). 5: 11–26.
- Funke, Joachim (1993). "Microworlds based on linear equation systems: A new approach to complex problem solving and experimental results" (PDF). In Strube, G.; Wender, K.-F. (eds.). The cognitive psychology of knowledge. Amsterdam: Elsevier Science Publishers. pp. 313–330.
- Funke, Joachim (1995). "Experimental research on complex problem solving" (PDF). In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 243–268.
- Funke, U. (1995). "Complex problem solving in personnel selection and training". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 219–240.
- Groner, M.; Groner, R.; Bischof, W. F. (1983). "Approaches to heuristics: A historical review". In Groner, R.; Groner, M.; Bischof, W. F. (eds.). Methods of heuristics. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 1–18.
- Hayes, J. (1980). The complete problem solver. Philadelphia: The Franklin Institute Press.
- Huber, O. (1995). "Complex problem solving as multistage decision making". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 151–173.
- Hübner, Ronald (1989). "Methoden zur Analyse und Konstruktion von Aufgaben zur kognitiven Steuerung dynamischer Systeme" [Methods for the analysis and construction of dynamic system control tasks] (PDF). Zeitschrift für Experimentelle und Angewandte Psychologie (in German). 36: 221–238.
- Hunt, Earl (1991). "Some comments on the study of complexity". In Sternberg, R. J.; Frensch, P. A. (eds.). Complex problem solving: Principles and mechanisms. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 383–395. ISBN 978-1-317-78386-2.
- Hussy, W. (1985). "Komplexes Problemlösen—Eine Sackgasse?" [Complex problem solving—a dead end?]. Zeitschrift für Experimentelle und Angewandte Psychologie (in German). 32: 55–77.
- Kluwe, R. H. (1993). "Chapter 19 Knowledge and Performance in Complex Problem Solving". The Cognitive Psychology of Knowledge. Advances in Psychology. Vol. 101. pp. 401–423. doi:10.1016/S0166-4115(08)62668-0. ISBN 978-0-444-89942-2.
- Kluwe, R. H. (1995). "Single case studies and models of complex problem solving". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 269–291.
- Kolb, S.; Petzing, F.; Stumpf, S. (1992). "Komplexes Problemlösen: Bestimmung der Problemlösegüte von Probanden mittels Verfahren des Operations Research—ein interdisziplinärer Ansatz" [Complex problem solving: determining the quality of human problem solving by operations research tools—an interdisciplinary approach]. Sprache & Kognition (in German). 11: 115–128.
- Krems, Josef F. (1995). "Cognitive flexibility and complex problem solving". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 201–218.
- Melzak, Z. (1983). Bypasses: A Simple Approach to Complexity. London, UK: Wiley.
- Müller, H. (1993). Komplexes Problemlösen: Reliabilität und Wissen [Complex problem solving: Reliability and knowledge] (in German). Bonn, Germany: Holos.
- Paradies, M.W.; Unger, L. W. (2000). TapRooT—The System for Root Cause Analysis, Problem Investigation, and Proactive Improvement. Knoxville, Tenn.: System Improvements.
- Putz-Osterloh, Wiebke (1993). "Chapter 15 Strategies for Knowledge Acquisition and Transfer of Knowledge in Dynamic Tasks". The Cognitive Psychology of Knowledge. Advances in Psychology. Vol. 101. pp. 331–350. doi:10.1016/S0166-4115(08)62664-3. ISBN 978-0-444-89942-2.
- Riefer, David M.; Batchelder, William H. (1988). "Multinomial modeling and the measurement of cognitive processes" (PDF). Psychological Review. 95 (3): 318–339. doi:10.1037/0033-295x.95.3.318. S2CID 14994393. Archived from the original (PDF) on 2018-11-25.
- Schaub, H. (1993). Modellierung der Handlungsorganisation (in German). Bern, Switzerland: Hans Huber.
- Strauß, B. (1993). Konfundierungen beim Komplexen Problemlösen. Zum Einfluß des Anteils der richtigen Lösungen (ArL) auf das Problemlöseverhalten in komplexen Situationen [Confounds in complex problem solving. On the influence of the proportion of correct solutions on problem-solving behavior in complex situations] (in German). Bonn, Germany: Holos.
- Strohschneider, S. (1991). "Kein System von Systemen! Kommentar zu dem Aufsatz 'Systemmerkmale als Determinanten des Umgangs mit dynamischen Systemen' von Joachim Funke" [No system of systems! Reply to the paper 'System features as determinants of behavior in dynamic task environments' by Joachim Funke]. Sprache & Kognition (in German). 10: 109–113.
- Tonelli, Marcello (2011). Unstructured Processes of Strategic Decision-Making. Saarbrücken, Germany: Lambert Academic Publishing. ISBN 978-3-8465-5598-9.
- Van Lehn, Kurt (1989). "Problem solving and cognitive skill acquisition". In Posner, M. I. (ed.). Foundations of cognitive science (PDF). Cambridge, Mass.: MIT Press. pp. 527–579.
- Wisconsin Educational Media Association (1993), Information literacy: A position paper on information problem-solving, WEMA Publications, vol. ED 376 817, Madison, Wis.
(Portions adapted from Michigan State Board of Education's Position Paper on Information Processing Skills, 1992.)
External links
[edit]
Learning materials related to Solving Problems at Wikiversity
Problem solving
Definitions and Foundations
Core Definition and Distinctions
Problem solving comprises the cognitive processes by which individuals or systems direct efforts toward attaining a goal absent an immediately known solution method.[11] This entails recognizing a gap between the existing state and the target outcome, then deploying mental operations—such as trial-and-error, analogy, or systematic search—to reduce that discrepancy and reach resolution.[4] Empirical studies in cognitive psychology underscore that effective problem solving hinges on representing the problem accurately in working memory, evaluating feasible actions, and iterating based on feedback from intermediate states.[12]

A primary distinction within problem solving concerns the problem's structure: well-defined problems provide explicit initial conditions, unambiguous goals, and permissible operators, enabling algorithmic resolution, as exemplified by chess moves under fixed rules or arithmetic computations.[13] Ill-defined problems, conversely, feature incomplete specifications—such as vague objectives or undefined constraints—necessitating initial efforts to refine the problem formulation itself, common in domains like urban planning or scientific hypothesis testing where multiple viable interpretations exist.[14] This dichotomy influences solution efficacy: well-defined cases often yield faster, more reliable outcomes via forward search, while ill-defined ones demand heuristic strategies and creative restructuring to avoid fixation on suboptimal paths.[15]

Problem solving is further distinguished from routine procedures, which invoke pre-learned scripts or automated responses for familiar scenarios without requiring novel cognition, such as habitual route navigation.[16] Genuine problem solving arises when routines falter, requiring adaptive reasoning to devise non-standard interventions. It also contrasts with decision making: the latter entails evaluating and selecting among existing options to optimize outcomes under constraints, whereas problem solving precedes this by generating or identifying viable alternatives to address root discrepancies.[18][19] These boundaries highlight problem solving's emphasis on causal intervention over mere choice, grounded in first-principles analysis of state transitions rather than probabilistic selection.[20]

Psychological and Cognitive Perspectives
Psychological perspectives on problem solving emphasize mental processes over observable behaviors, viewing it as a cognitive activity involving representation, search, and transformation of problem states. In Gestalt psychology, Wolfgang Köhler's experiments with chimpanzees in the 1910s demonstrated insight, where solutions emerged suddenly through restructuring the perceptual field rather than trial-and-error. For instance, chimps stacked boxes to reach bananas, indicating cognitive reorganization beyond incremental learning.[21][22]

The information-processing approach, advanced by Allen Newell and Herbert A. Simon in the 1950s, models problem solving as searching a problem space defined by initial states, goal states, and operators. Their General Problem Solver (GPS) program, implemented in 1959, used means-ends analysis to reduce differences between current and goal states via heuristic steps. This framework posits humans as symbol manipulators akin to computers, supported by protocols from tasks like the Tower of Hanoi.[23][24]

Cognitive strategies distinguish algorithms, which guarantee solutions through exhaustive enumeration like breadth-first search, from heuristics, efficient shortcuts such as hill-climbing or analogy that risk suboptimal outcomes but save computational resources. Heuristics such as the availability heuristic influence real-world decisions, as evidenced in Tversky and Kahneman's 1974 studies on judgment under uncertainty. Functional fixedness, identified by Karl Duncker in 1945, exemplifies barriers where objects are perceived only in accustomed uses, impeding novel applications.[25][26]

Graham Wallas's 1926 model outlines four stages: preparation (gathering information), incubation (unconscious processing), illumination (the aha moment), and verification (testing the solution). Empirical support includes studies showing incubation aids insight after breaks from fixation, though mechanisms remain debated, with neural imaging suggesting default mode network activation during incubation. Mental sets, preconceived solution patterns, further constrain flexibility, as replicated in Einstellung effect experiments where familiar strategies block superior alternatives.[27][4][28]

Computational and Logical Frameworks
In computational models of problem solving, problems are represented as searches through a state space, comprising initial states, goal states, operators for state transitions, and path costs.[29] This paradigm originated with Allen Newell, Herbert A. Simon, and J.C. Shaw's General Problem Solver (GPS) program, implemented in 1957 at RAND Corporation, which automated theorem proving by mimicking human means-ends analysis: it identified discrepancies between current and target states, selected operators to minimize differences, and recursively applied subgoals.[30] GPS's success in solving logic puzzles and proofs validated computational simulation of cognition, though it was limited by exponential search complexity in large spaces.[31]

Uninformed search algorithms systematically explore state spaces without goal-specific guidance; breadth-first search (BFS) expands nodes level by level, ensuring shortest-path optimality for uniform costs but requiring significant memory, while depth-first search (DFS) prioritizes depth via stack-based recursion, conserving memory at the risk of incomplete exploration in infinite spaces.[32] Informed methods enhance efficiency with heuristics; the A* algorithm, formulated in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael, evaluates nodes by f(n) = g(n) + h(n), where g(n) is the path cost from the start and h(n) is an admissible heuristic estimate to the goal, guaranteeing optimality if h(n) never overestimates.[32] These techniques underpin AI planning and optimization, scaling via pruning and approximations for real-world applications like route finding.[32]

Logical frameworks formalize problem solving through deductive inference in symbolic systems, encoding knowledge in propositional or first-order logic and deriving solutions via sound proof procedures.[33] Automated reasoning tools apply resolution or tableaux methods to check satisfiability or entailment; for instance, SAT solvers like MiniSat, evolving from the Davis-Putnam-Logemann-Loveland procedure (1962), efficiently decide propositional formulas despite NP-completeness by clause learning and unit propagation.[33] Constraint satisfaction problems (CSPs) model combinatorial tasks—such as scheduling or map coloring—as variable domains with binary or global constraints, solved by backtracking search augmented with arc consistency to prune inconsistent partial assignments.[34]

Logic programming paradigms, exemplified by Prolog (developed in 1972 by Alain Colmerauer), declare problems as Horn clauses—facts and rules—enabling declarative solving via SLD-resolution and backward chaining, where queries unify with knowledge bases to generate proofs as computations.[35] Prolog's built-in search handles puzzles like the eight queens by implicit depth-first traversal with automatic backtracking on failures, though practical limits arise from left-recursion and lack of tabling without extensions.[36] These frameworks prioritize completeness and soundness, in contrast to heuristic searches, but demand precise formalization to avoid undecidability in expressive logics.[33]

Engineering and Practical Applications
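To make the informed-search account concrete, the sketch below implements A* in Python over a toy weighted graph. The graph, heuristic values, and function name are illustrative assumptions rather than anything drawn from the cited sources; because the heuristic here never overestimates the true remaining cost, the search returns an optimal path.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: dict node -> list of (neighbor, edge_cost).
    h: dict node -> heuristic estimate of remaining cost (admissible)."""
    # Frontier entries are (f, g, node, path) with f(n) = g(n) + h(n).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g  # with admissible h, the first goal pop is optimal
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

# Toy route-finding instance (made-up costs and heuristic values).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, h, "A", "D"))  # (['A', 'B', 'C', 'D'], 4)
```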
In engineering, problem solving employs structured methodologies to address technical challenges, often integrating analytical, numerical, and experimental techniques to derive verifiable solutions. Analytical methods involve deriving exact solutions through mathematical modeling, such as solving differential equations for structural stress analysis. Numerical methods approximate solutions via computational algorithms, like finite element analysis used in simulating fluid dynamics or heat transfer in mechanical systems. Experimental methods validate models through physical testing, ensuring alignment with real-world conditions, as seen in prototyping phases where iterative trials refine designs based on empirical data.[37]

The engineering design process formalizes problem solving as an iterative cycle: defining the problem with clear objectives and constraints, researching background data, generating solution concepts, prototyping, testing under controlled conditions, and evaluating outcomes to optimize or redesign. This approach, rooted in causal analysis of failure modes, minimizes risks in applications like aerospace component development, where failure probabilities must be quantified below 10⁻⁹ per flight hour. For instance, NASA's use of this process in the Space Launch System addressed propulsion inefficiencies by iterating through over 1,000 test firings since 2015, achieving thrust levels exceeding 2 million pounds.[38]

In industrial settings, systematic problem solving enhances operational efficiency through tools like root cause analysis (RCA) and the 8 Disciplines (8D) method, which dissect issues via data-driven fishbone diagrams and Pareto charts to isolate dominant causes. Manufacturers apply these in lean production, reducing defect rates by up to 90% in automotive assembly lines; Toyota's implementation since the 1950s has sustained kaizen improvements, correlating with annual quality gains of 20-30% in supplier networks. Similarly, PDCA (Plan-Do-Check-Act) cycles support continuous refinement in chemical processing, where Six Sigma deployments have cut variability in yield processes, raising them from 3 toward 6 sigma levels and producing cost savings exceeding $1 billion annually across Fortune 500 firms by 2020. These methods prioritize empirical validation over assumption, countering biases in anecdotal reporting by mandating statistical significance in conclusions.[39][40]

Evolutionary and Biological Underpinnings
Problem-solving abilities in animals demonstrate evolutionary adaptations to environmental challenges, with evidence of innovation and tool use appearing across taxa such as primates, corvids, and cetaceans, suggesting convergent evolution of cognitive flexibility for novel problem resolution.[41] In primates, these capacities likely arose in response to socio-ecological pressures, including foraging complexities and social navigation, fostering proto-forms of planning and causal inference that prefigure human cognition.[42] Ontogenetic development influences these traits, where genetic and experiential factors during growth modulate problem-solving proficiency, as observed in comparative studies of avian and mammalian species.[43]

Biologically, the prefrontal cortex (PFC) serves as a core neural substrate for problem solving, enabling executive functions such as working memory, inhibitory control, and the dynamic simulation of action-outcome sequences essential for goal-directed behavior.[44] Neuroimaging and lesion studies confirm PFC activation during tasks requiring hypothesis testing and credit assignment, where it integrates sensory inputs with predictive modeling to evaluate potential solutions.[45] In humans, PFC maturation extends into adolescence, correlating with improvements in abstract reasoning and risk assessment, underscoring its role in transitioning from impulsive to strategic problem resolution.[46]

Genetic factors contribute to individual variation in problem-solving efficacy, with heritability estimates for related cognitive traits like intelligence reaching 50-80% in twin studies.[47] Polymorphisms in the catechol-O-methyltransferase (COMT) gene, which regulates dopamine levels in the PFC, influence insight-based problem solving, where the Val/Val genotype is associated with enhanced performance on tasks demanding rapid neural signaling over sustained flexibility.[48] Comparative genomics reveals conserved mechanisms, such as dopamine receptor gene expression (e.g., DRD4), linking problem-solving divergence in birds to mammalian analogs, implying deep evolutionary roots in neurochemical modulation of cognitive adaptability.[49]

Historical Evolution
Pre-20th Century Insights
Early insights into problem solving emerged in ancient philosophy, particularly through dialectical methods that emphasized questioning and logical deduction to resolve intellectual puzzles. In ancient Greece around 400 BCE, Socrates developed the elenchus, a technique of probing interrogation to expose contradictions in beliefs and guide interlocutors toward clearer understanding, effectively framing problem resolution as a collaborative uncovering of truth via sustained dialogue.[50] This approach prioritized self-examination over rote acceptance, influencing subsequent views on reasoning as iterative refinement rather than abrupt revelation.[51]

Aristotle, in the 4th century BCE, advanced deductive logic in works like the Organon, introducing syllogisms as formal structures for deriving conclusions from premises, enabling systematic evaluation of arguments and solutions to definitional or classificatory problems.[52] His framework classified reasoning into demonstrative (for scientific knowledge) and dialectical forms, underscoring logic's role in dissecting complex issues into verifiable components, though limited to categorical propositions without modern quantifiers.[53] This syllogistic method dominated Western thought for over two millennia, providing tools for problem solving in ethics, physics, and biology by ensuring inferences aligned with observed realities.[52]

In Hellenistic mathematics circa 300 BCE, Euclid's Elements exemplified axiomatic deduction, starting from unproven postulates—such as "a straight line can be drawn between any two points"—to prove theorems through rigorous chains of implication, solving geometric construction problems like duplicating a cube via logical progression rather than empirical trial.[54] This method treated problems as derivable from foundational assumptions, minimizing ambiguity and fostering certainty in spatial reasoning, though it assumed Euclidean space without addressing non-Euclidean alternatives.[55]

René Descartes, in his 1637 Discourse on the Method, outlined a prescriptive approach with four rules: accept only clear and distinct ideas, divide problems into smallest parts, synthesize from simple to complex, and review comprehensively to avoid omissions.[56] Applied in his analytic geometry, this reduced multifaceted issues—like trajectory calculations—to algebraic manipulations, bridging philosophy and science by emphasizing methodical skepticism and decomposition over intuition alone.[57] Descartes' emphasis on order and enumeration anticipated modern algorithmic thinking, though critiqued for over-relying on introspection amid empirical gaps.[58]
Gestalt and Early 20th-Century Theories
Gestalt psychology, originating in the early 20th century with Max Wertheimer's 1912 work on apparent motion, applied holistic principles to cognition, arguing that problem solving requires perceiving the entire structural configuration of a problem rather than assembling solutions from isolated elements.[59] This approach rejected the associationist and behaviorist emphasis on trial-and-error learning, positing instead that effective solutions arise from restructuring the problem representation to reveal inherent relations.[7] Key figures including Wertheimer, Wolfgang Köhler, and Kurt Koffka maintained that thinking involves dynamic reorganization of the perceptual field, enabling insight (Einsicht), a sudden "aha" moment where the solution becomes evident as part of the whole.[60]

Wolfgang Köhler's experiments with chimpanzees on Tenerife from 1913 to 1917 provided empirical support for insight in problem solving. In tasks requiring tool use or environmental manipulation, such as stacking boxes to reach suspended bananas or joining bamboo sticks to retrieve food, apes like Sultan initially failed through random attempts but succeeded abruptly after a pause, indicating perceptual reorganization rather than reinforced associations.[61] Köhler documented these in The Mentality of Apes (1921), distinguishing insightful behavior—apprehending means-ends relations—from mechanical trial-and-error, challenging strict behaviorism by demonstrating proto-intelligence in non-human primates.[62] These findings underscored that problem solving depends on grasping the problem's gestalt, not incremental conditioning.[63]

Max Wertheimer further developed these ideas, contrasting productive thinking—which uncovers novel structural insights—with reproductive thinking reliant on memorized routines. In analyses of mathematical proofs and everyday puzzles, he showed how fixation on superficial features blocks solutions, resolvable only by reformulating the problem to align with its essential form.[7] Though formalized in Productive Thinking (1945), Wertheimer's lectures from the 1920s influenced early Gestalt applications, emphasizing education's role in fostering holistic apprehension over rote methods.[64]

Early 20th-century theories thus shifted focus from associative chains, as in Edward Thorndike's 1905 law of effect, to causal, perceptual dynamics in cognition.[65]

Information-Processing Paradigm (1950s-1980s)
The information-processing paradigm in problem solving arose during the 1950s as cognitive psychology shifted from behaviorist stimulus-response models to viewing the mind as a symbol-manipulating system analogous to early digital computers. This approach posited that human cognition involves encoding environmental inputs, storing representations in memory, applying rule-based transformations, and evaluating outputs against goals, much like algorithmic processing in machines. Pioneered amid advances in computer science and cybernetics, it emphasized internal mental operations over observable behaviors, drawing on empirical studies of human performance on logic puzzles and games.[66][67]

Central to the paradigm was the work of Allen Newell and Herbert A. Simon, who in 1957–1959 developed the General Problem Solver (GPS), one of the first AI programs explicitly designed to simulate human-like reasoning. GPS operated within a "problem space" framework, representing problems as a set of possible states (nodes), transitions via operators (actions that alter states), an initial state, and a goal state. It employed means-ends analysis, a heuristic strategy that identifies the discrepancy between the current state and the goal, then selects operators to minimize that gap, often by setting subgoals. Implemented on the JOHNNIAC computer at RAND Corporation, GPS successfully solved tasks like the Tower of Hanoi puzzle and logical theorems, demonstrating that rule-based search could replicate observed human protocols from think-aloud experiments. Newell, Simon, and J.C. Shaw's 1959 report detailed GPS's architecture, highlighting its reliance on heuristic rather than exhaustive search to manage computational complexity.[68][24]

By the 1960s and 1970s, the paradigm expanded through Newell and Simon's empirical investigations, formalized in their 1972 book Human Problem Solving, which analyzed over 10,000 moves from chess masters and thousands of steps in puzzle-solving protocols. They proposed the heuristic search hypothesis: problem solvers construct and navigate internal representations via selective exploration guided by evaluations of promising paths, bounded by cognitive limits like working memory capacity (around 7±2 chunks, per related information theory). This era's models influenced AI developments, such as production systems, and cognitive theories positing that intelligence stems from physical symbol systems capable of indefinite information manipulation. Simon's concept of bounded rationality—decision-making under constraints of incomplete information and finite computation—integrated economic realism into the framework, explaining why humans favor satisficing over optimal solutions in complex environments. The paradigm's dominance persisted into the 1980s, underpinning lab-based studies of well-structured problems, though its computer metaphor faced scrutiny for overlooking holistic or intuitive elements evident in real-world cognition.[24][69]

Post-2000 Developments and Critiques
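The following minimal Python sketch illustrates the flavor of means-ends analysis: it repeatedly picks an operator whose effects reduce the difference between the current state and the goal. The state encoding and toy operators are hypothetical, and the sketch deliberately omits GPS's recursive subgoaling on unmet preconditions.

```python
def means_ends(state, goal, operators):
    """state, goal: frozensets of facts.
    operators: list of (name, preconditions, adds, deletes) tuples.
    Simplified sketch; real GPS recursed on unmet preconditions as subgoals."""
    plan = []
    while not goal <= state:          # loop until all goal facts are present
        difference = goal - state     # facts still separating us from the goal
        for name, pre, adds, deletes in operators:
            # choose an applicable operator that reduces the difference
            if adds & difference and pre <= state:
                state = (state - deletes) | adds
                plan.append(name)
                break
        else:
            return None               # no operator reduces the remaining difference
    return plan

# Hypothetical toy domain.
ops = [("dress", frozenset({"awake"}), frozenset({"dressed"}), frozenset()),
       ("commute", frozenset({"dressed"}), frozenset({"at work"}), frozenset())]
print(means_ends(frozenset({"awake"}), frozenset({"dressed", "at work"}), ops))
# -> ['dress', 'commute']
```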
Since the early 2000s, research on problem solving has shifted toward complex problem solving (CPS), defined as the self-regulated psychological processes required to achieve goals in dynamic, interconnected environments with incomplete information.[70] This framework, gaining prominence in European cognitive psychology around the turn of the century, distinguishes CPS from traditional well-structured puzzles by emphasizing adaptation to evolving conditions, knowledge acquisition about system dynamics, and handling of uncertainty. Empirical studies, such as those using microworld simulations, have shown CPS correlates with fluid intelligence but requires domain-specific exploration and reduction of complexity through mental models.[71]

Parallel developments include the formal assessment of collaborative problem solving (ColPS), integrated into the OECD's Programme for International Student Assessment (PISA) in 2015, which evaluated 15-year-olds' abilities to share information, negotiate roles, and manage conflicts in virtual team scenarios across 29 countries.[72] High-performing systems, like those in Estonia and Japan, demonstrated superior communication and collective knowledge construction, highlighting ColPS as a 21st-century competency distinct from individual reasoning.[73]

In computational domains, AI milestones such as DeepMind's AlphaGo in 2016 advanced problem solving through deep reinforcement learning, enabling superhuman performance in Go by self-play and value network approximations, influencing hybrid human-AI models. Subsequent systems like AlphaProof (2024) achieved silver-medal level on International Mathematical Olympiad problems, blending neural networks with formal theorem provers for novel proofs.[74]

Critiques of earlier information-processing models, such as those by Newell and Simon, intensified post-2000, arguing their protocol analysis and strategy identification methods failed to aggregate data systematically or uncover general heuristics applicable beyond lab tasks.[75] Linear, equation-like approaches overlook real-world nonlinearity and emergence, rendering them impractical for ill-defined problems where feedback loops and values shape outcomes.[76] The rise of embodied cognition challenged disembodied symbol manipulation, with experiments showing bodily actions—like gestures or motor simulations—facilitate insight and representation shifts in tasks such as mental rotation or analogy formation.[77] These perspectives underscore limitations in classical models' neglect of situated, enactive processes, advocating integration of dual-process theories with attention and environmental constraints for more robust accounts.[78]

Core Processes and Models
General Stage-Based Models
Stage-based models of problem solving conceptualize the process as progressing through a series of discrete, often sequential phases, emphasizing structured cognition over unstructured trial-and-error. These models, rooted in early 20th-century psychological and mathematical theories, posit that effective problem resolution requires deliberate movement from problem apprehension to solution verification, with potential for iteration if initial attempts fail. Empirical support for such staging derives from observational studies of human solvers, where transitions between phases correlate with reduced cognitive load and higher success rates in controlled tasks.[79]

A foundational example is George Pólya's four-step framework, introduced in his 1945 treatise How to Solve It, which applies broadly beyond mathematics to any well-defined problem. The first step, "understand the problem," entails identifying givens, unknowns, and constraints through restatement and visualization. The second, "devise a plan," involves selecting heuristics such as drawing diagrams, seeking analogies, or reversing operations. Execution in the third step applies the plan systematically, while the fourth, "look back," evaluates the outcome for correctness, generality, and alternative approaches. This model's efficacy has been validated in educational settings, where training on its stages improves student performance by 20-30% in standardized problem sets.[80][81]

For creative or insight-driven problems, Graham Wallas's 1926 model delineates four phases: preparation (acquiring relevant knowledge), incubation (subconscious rumination), illumination (the aha moment), and verification (rational testing). Neuroimaging studies corroborate this sequence, showing shifts from prefrontal activation in preparation to temporal lobe engagement during incubation-like breaks, with illumination linked to gamma-band neural synchrony. Unlike linear models, Wallas's accommodates non-monotonic progress, explaining breakthroughs in domains like scientific discovery where explicit planning stalls.[6]

Allen Newell and Herbert Simon's information-processing paradigm, developed in the 1950s and formalized in their 1972 work, frames stages around a "problem space": initial state appraisal, goal-state definition, operator selection for state transformation, and heuristic search to bridge gaps via means-ends analysis. This computational model, tested through protocols analyzing think-aloud data from puzzle solvers, reveals that experts traverse fewer states by chunking representations, achieving solutions 5-10 times faster than novices. Its stages underscore causal mechanisms like reduced working memory demands through hierarchical planning.[82][83]

Contemporary adaptations, such as those in quality management, extend these to practical cycles: problem definition, root-cause diagnosis via tools like fishbone diagrams, solution generation and implementation, and monitoring for sustainability. Field trials in manufacturing report 15-25% defect reductions when stages are enforced, attributing gains to explicit causal mapping over intuitive leaps. Critics note that rigid staging may overlook domain-specific nonlinearities, as evidenced by protocol analyses where 40% of solvers revisit early phases post-execution.[39]

Trial-and-Error vs. Systematic Approaches
Trial-and-error approaches to problem solving involve iteratively testing potential solutions without a predefined structure, relying on feedback from successes and failures to refine actions until a viable outcome emerges. This method, foundational in behavioral psychology, was empirically demonstrated in Edward Thorndike's 1898 experiments using puzzle boxes, where cats escaped enclosures through repeated, incremental trials, gradually associating specific lever pulls or steps with release via the law of effect—strengthening responses that led to rewards.[84][85] Such processes are adaptive in unstructured environments, as evidenced by computational models showing deterministic strategies emerging in human trial-and-error learning tasks, where participants shift from random exploration to patterned responses after initial errors.[86]

In contrast, systematic approaches employ algorithms—rigid, step-by-step procedures that exhaustively enumerate possibilities to guarantee a correct solution if one exists, such as backward chaining in logic puzzles or divide-and-conquer in computational problems.[87][88] These methods prioritize completeness over speed, deriving from formal systems like mathematics, where, for instance, the Euclidean algorithm for greatest common divisors systematically reduces inputs until termination, avoiding redundant trials.[89]

Trial-and-error excels in ill-defined or novel problems with unknown parameters, enabling discovery through experiential accumulation, but incurs high costs in time and resources for large search spaces, often yielding suboptimal solutions due to incomplete exploration.[87] Systematic methods mitigate these inefficiencies by ensuring optimality and reproducibility in well-defined domains, yet prove impractical for computationally intractable problems, as exponential growth in possibilities overwhelms human or even machine capacity without heuristics.[88] Empirical contrasts in learning tasks reveal trial-and-error's utility in flexible tool use via mental simulation, accelerating adaptation beyond pure randomness, while systematic strategies dominate in verifiable contexts like theorem proving, where error rates drop with procedural adherence.[90] Hybrid applications, blending initial trial phases with algorithmic refinement, often maximize efficiency across cognitive studies.[86]

Role of Insight and Representation Changes
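A small Python sketch of the contrast, using the greatest-common-divisor task mentioned above: random trial-and-error may or may not find the answer, while Euclid's systematic procedure is guaranteed to. The trial-and-error function is an illustrative strawman, not a documented method.

```python
import random

def gcd_trial_and_error(a, b, attempts=10_000):
    """Unstructured search: sample candidate divisors and keep the best found."""
    best = 1
    for _ in range(attempts):
        candidate = random.randint(1, min(a, b))
        if a % candidate == 0 and b % candidate == 0:
            best = max(best, candidate)
    return best  # correct only if the true gcd happened to be sampled

def gcd_euclid(a, b):
    """Systematic procedure: always correct, terminating in O(log min(a, b)) steps."""
    while b:
        a, b = b, a % b
    return a

print(gcd_euclid(1071, 462))           # 21, guaranteed
print(gcd_trial_and_error(1071, 462))  # usually 21, but only by luck
```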
Insight in problem solving refers to the sudden emergence of a solution following an impasse, often characterized by an "aha" experience where the problem solver perceives novel connections or relationships among elements previously overlooked.[91] This phenomenon, distinct from incremental trial-and-error approaches, involves a qualitative shift in cognitive processing rather than mere accumulation of information.[92] Gestalt psychologists, such as Wolfgang Köhler and Max Wertheimer, pioneered the study of insight through chimpanzee experiments and human puzzles in the early 20th century, demonstrating that solutions arise from perceptual reorganization rather than associative reinforcement.[93] In Köhler's 1925 observations of Sultan the chimpanzee stacking boxes to reach bananas, the insight manifested as an abrupt reconfiguration of available objects into a functional whole, bypassing exhaustive search.[7]

Central to insight is the mechanism of representation change, whereby the solver alters the mental model of the problem, enabling previously inapplicable operators or actions to become viable. Stellan Ohlsson's Representational Change Theory (RCT), developed in the 1980s and refined in subsequent works, posits that initial representations impose constraints—such as selective attention to dominant features or implicit assumptions—that block progress, leading to fixation.[94] Overcoming this requires processes like constraint relaxation (loosening unhelpful assumptions) or re-encoding (reinterpreting elements in a new frame), which redistribute activation across the problem space and reveal hidden affordances.[95] For instance, in Karl Duncker's 1945 candle problem, participants fixate on the tack box as a mere container, but insight emerges upon representing the box as a platform, a shift validated in empirical studies showing reduced solution times after hints prompting such reframing.[60]

Empirical support for representation changes comes from behavioral paradigms distinguishing insight from analytic problems; in insight tasks like the nine-dot puzzle, solvers exhibit longer impasses followed by rapid correct responses upon restructuring (e.g., extending lines beyond the perceived boundary), with eye-tracking data revealing shifts from constrained to expansive visual exploration.[96] Neuroscientific evidence further corroborates this: functional MRI studies indicate heightened activity in the right anterior superior temporal gyrus during insight moments, associated with semantic integration and gist detection, alongside pre-insight alpha-band desynchronization signaling weakened top-down constraints.[91] These findings align with causal models where impasse fosters diffuse processing, allowing low-activation representations to surface, though individual differences in working memory capacity modulate susceptibility to fixation, with higher-capacity individuals more prone to initial entrenchment but equally capable of breakthroughs.[97]

Critiques of insight-centric models highlight that not all breakthroughs feel sudden; gradual representation shifts can precede the "aha," as evidenced by think-aloud protocols showing incremental constraint loosening in compound remote associates tasks.[98] Nonetheless, representation changes remain pivotal, explaining why training in perspective-taking or analogy use—techniques that prompt reframing—enhances insight rates by 20-30% in controlled experiments, underscoring their practical utility beyond serendipity.[99] This process contrasts with algorithmic methods by emphasizing non-monotonic leaps, where discarding prior schemas yields adaptive novelty in ill-structured domains like scientific discovery.[100]

Strategies and Techniques
Heuristic and Analogical Methods
Heuristics represent practical, experience-based strategies that enable individuals to navigate complex problems efficiently by approximating solutions rather than pursuing exhaustive analysis. These mental shortcuts, rooted in bounded rationality as conceptualized by Herbert Simon in the 1950s, prioritize speed and cognitive economy over guaranteed optimality, often succeeding in uncertain environments where full information is unavailable.[101] In problem-solving contexts, heuristics guide actions such as reducing the problem to simpler subproblems or evaluating progress toward a goal, as seen in means-ends analysis where differences between current and desired states are iteratively minimized.[102] Empirical studies demonstrate their efficacy; for instance, in mathematical tasks, applying heuristics like working backwards from the solution or identifying invariants has been shown to increase success rates by directing attention to relevant features.[103]

George Pólya formalized heuristics for mathematical problem solving in his 1945 book How to Solve It, advocating a structured approach: first, comprehend the problem's conditions and goals; second, devise a plan using tactics such as analogy, pattern recognition, or decomposition; third, execute the plan; and fourth, reflect on the solution for generalization.[104] Specific heuristics include seeking auxiliary problems to illuminate the original, exploiting symmetry, or adopting a forward or backward perspective, which collectively reduce computational demands while fostering insight. These methods, validated through decades of application in education and engineering, underscore heuristics' role in overcoming fixation on initial representations, though they risk errors if misapplied, as evidenced by systematic deviations in probabilistic judgments.[105][106]

Analogical methods complement heuristics by transferring knowledge from a familiar source domain to the novel target problem, leveraging structural similarities to generate solutions. This process involves detecting correspondences between relational systems, as opposed to mere object matches, allowing solvers to adapt proven strategies to new contexts. Dedre Gentner's structure-mapping theory, developed in the 1980s, formalizes this as an alignment of relational predicates—such as causal chains or hierarchies—projected from source to target, with empirical tests showing superior performance in tasks like Duncker's tumor problem when surface dissimilarities are minimized to highlight deep alignments.[107] For example, solving a radiation dosage puzzle by analogizing to a military siege tactic succeeded in laboratory settings when participants were prompted to map convergence principles, yielding transfer rates up to 80% under guided conditions.[108][109]

Challenges in analogical reasoning include spontaneous retrieval failures, where solvers overlook accessible analogs without explicit cues, as documented in studies where only 20-30% of participants transferred unprompted from base to target problems.[110] Nonetheless, training in relational mapping enhances adaptability across domains, from scientific innovation—such as Rutherford's atomic model drawing on planetary orbits—to everyday troubleshooting, where causal realism demands verifying mapped inferences against empirical outcomes to avoid superficial traps.

Integration of heuristics and analogies often amplifies effectiveness; Pólya explicitly recommended analogy as a planning heuristic, combining rapid approximation with structured transfer for robust problem resolution.[111][104]

Algorithmic and Optimization Techniques
Algorithmic techniques in problem solving encompass systematic, rule-based procedures designed to yield exact solutions for well-defined, computable problems, often contrasting with heuristic methods by guaranteeing correctness and completeness when a solution exists. These approaches rely on formal representations of the problem space, such as graphs or state transitions, and leverage computational efficiency to navigate search spaces. In practice, they are applied in domains like scheduling, routing, and resource allocation, where input constraints and objectives can be precisely modeled.[112][113]

Key paradigms include divide-and-conquer, which recursively partitions a problem into independent subproblems, solves each, and merges results; this reduces complexity from exponential to polynomial time in cases like merge sort or fast Fourier transforms. Greedy algorithms make locally optimal choices at each step, yielding global optima for problems like minimum spanning trees via Kruskal's algorithm (1956), though they fail when the problem's substructure does not permit it. Backtracking systematically explores candidate solutions by incrementally building and abandoning partial ones that violate constraints, effective for puzzles like the N-Queens problem, with pruning via bounding to mitigate combinatorial explosion.[114][115]

Dynamic programming, formalized by Richard Bellman in 1953 while at RAND Corporation, tackles sequential decision problems exhibiting optimal substructure and overlapping subproblems. It computes solutions bottom-up or top-down with memoization, storing intermediate results in a table to avoid redundant calculations; for instance, the Fibonacci sequence computation drops from O(2^n) to O(n) time (a memoization sketch follows the table below). Bellman coined the term to mask its mathematical focus from non-technical sponsors, drawing from multistage decision processes in economics and control theory. Empirical benchmarks show it outperforms naive recursion by orders of magnitude in knapsack or shortest-path problems like Floyd-Warshall (1962).[116][117]

Optimization techniques extend algorithmic methods to select the best solution among feasible ones, often under constraints like linearity or convexity. The simplex method, invented by George Dantzig in 1947 for U.S. Air Force logistics planning, iteratively pivots along edges of the polyhedral feasible region in linear programming, converging to an optimal vertex in polynomial average-case time despite worst-case exponential bounds. It solved real-world problems like diet formulation (Stigler, 1945) and transportation (Koopmans, 1949), with variants handling degeneracy via Bland's rule (1977). For nonlinear cases, gradient-based methods like steepest descent (Cauchy, 1847; later modernized in numerical optimization) follow local derivatives, but require convexity for global optimality, as non-convex landscapes can trap solutions in local minima—evidenced by failure rates in high-dimensional training of neural networks exceeding 20% without regularization.[118][119][120]

| Technique | Key Principle | Example Application | Time Complexity (Typical) | Citation |
|---|---|---|---|---|
| Divide-and-Conquer | Recursive partitioning | Merge sort | O(n log n) | [114] |
| Dynamic Programming | Subproblem memoization | 0/1 Knapsack | O(nW) where W is capacity | [116] |
| Simplex Method | Vertex pivoting | Linear resource allocation | Polynomial (average) | [118] |
| Greedy | Local optima selection | Huffman coding | O(n log n) | [115] |
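As flagged in the dynamic-programming paragraph above, here is a minimal Python sketch of memoization using the Fibonacci example: the naive recursion recomputes overlapping subproblems in roughly O(2^n) time, while caching solves each subproblem once in O(n). The function names are illustrative.

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential: overlapping subproblems are recomputed again and again."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Linear: each subproblem is solved once and its result cached."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(90))   # 2880067194370816120, computed almost instantly
# fib_naive(90) would need on the order of 2**90 calls -- impractical.
```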
Creative and Divergent Thinking Strategies
Divergent thinking in problem solving involves generating a wide array of potential solutions by exploring diverse possibilities, contrasting with convergent thinking that narrows options to the optimal choice. This process, first formalized by psychologist J.P. Guilford in his 1967 work on the structure of intellect, emphasizes fluency, flexibility, and originality in idea production to overcome functional fixedness and habitual responses.[122] Empirical studies link higher divergent thinking capacity to improved creative problem-solving outcomes, as measured by tasks requiring novel combinations of information.[123]

One prominent strategy is brainstorming, developed by advertising executive Alex Osborn in his 1953 book Applied Imagination. It encourages groups to produce as many ideas as possible without immediate criticism, aiming to leverage collective creativity through rules like deferring judgment and seeking wild ideas. However, meta-analyses reveal that interactive group brainstorming often yields fewer unique ideas per person than individuals working separately, due to production blocking—where participants wait to speak—and social loafing.[124][125] Nominal group techniques, combining individual ideation followed by group discussion, mitigate these issues and show superior results in controlled experiments.[126]

Lateral thinking techniques, coined by Edward de Bono in his 1970 book Lateral Thinking, promote indirect approaches to disrupt linear reasoning, such as challenging assumptions or using provocation to generate alternatives. A key application is the Six Thinking Hats method (1985), where participants adopt sequential perspectives—white for facts, red for emotions, black for risks, yellow for benefits, green for creativity, and blue for process control—to systematically explore problems. Experimental evidence indicates this structured divergence enhances fluency in idea generation and group decision-making, outperforming unstructured discussions in undergraduate settings, though long-term transfer to real-world solving requires further validation.[127][128][129]

Additional divergent strategies include problem reversal, which involves flipping the problem statement to reveal hidden assumptions, and random input methods, where unrelated stimuli prompt novel associations. These align with Guilford's divergent production factors and have been integrated into creative problem-solving frameworks like Osborn-Parnes, showing modest gains in divergent output in educational interventions.[3] Overall, while these strategies foster idea multiplicity, their efficacy depends on context, with individual practice often equaling or surpassing group efforts absent facilitation to counter cognitive inhibitions.[130]

Barriers and Limitations
Individual Cognitive Barriers
Individual cognitive barriers encompass inherent limitations and biases in human cognition that impede effective problem solving, often stemming from constrained mental resources or habitual thought patterns. These barriers include mental sets, functional fixedness, limitations in working memory capacity, and various cognitive biases that distort perception and judgment. Empirical studies demonstrate that such obstacles can reduce problem-solving efficiency, particularly in novel or complex scenarios, by constraining the exploration of alternative solutions.[131][4]

Mental set refers to the tendency to persist with familiar strategies or approaches that have succeeded in past problems, even when they are inappropriate for the current task. This rigidity prevents recognition of more suitable methods, as evidenced in experiments where participants repeatedly apply ineffective trial-and-error tactics to puzzles requiring insight. For instance, in the water jug problem, solvers fixated on addition or subtraction of measured amounts despite needing a different combination, leading to prolonged solution times.[132][133]

Functional fixedness manifests as the inability to perceive objects or tools beyond their conventional uses, thereby limiting creative applications in problem solving. Classic demonstrations, such as Duncker's candle problem, show participants struggling to use a box as a platform because they view it primarily as a container for matches. This barrier arises from perceptual categorization that inhibits novel reconceptualization, with studies confirming its impact on insight-dependent tasks.[134][135]

Working memory capacity, typically limited to holding and manipulating about four to seven chunks of information simultaneously, constrains the integration of multiple elements in complex problems. Research indicates that individuals with lower working memory capacity exhibit reduced performance in tasks requiring simultaneous tracking of variables, such as mathematical word problems or dynamic decision-making scenarios. This limitation exacerbates errors in dynamic environments where overloading working memory leads to incomplete representations of the problem space.[131][136][137]

Cognitive biases further compound these barriers by systematically skewing evaluation of evidence and options. Confirmation bias, for example, drives individuals to favor information aligning with preconceptions, ignoring disconfirming data crucial for accurate problem diagnosis. Anchoring bias causes overreliance on initial information, distorting subsequent judgments in estimation or planning tasks. Empirical reviews of decision-making in uncertain contexts highlight how these biases, including overconfidence, contribute to persistent errors in professional and everyday problem solving.[138][139][140]

Perceptual and Environmental Constraints
Perceptual and Environmental Constraints

Functional fixedness represents a key perceptual constraint, wherein individuals fixate on the conventional uses of objects, impeding recognition of alternative applications essential for problem resolution. In Karl Duncker's 1945 experiment, participants received a candle, matches, and a box of thumbtacks with the task of affixing the candle to a wall so that it would not drip wax; success required inverting the thumbtack box as a candle platform, yet only about 30% succeeded initially because they perceived the box solely as a container rather than a structural element.[141] This bias persists across contexts, as evidenced by subsequent replications showing similar failure rates when no hints to reframe object utility are given.[142]

Mental sets and unnecessary constraints further limit perception by imposing preconceived solution paths or self-generated restrictions not inherent to the problem. For instance, solvers often overlook viable options by rigidly adhering to previously successful strategies, a phenomenon termed the Einstellung effect, in which familiar algorithms block novel insights. Empirical studies confirm that such sets reduce solution rates in insight problems by constraining problem representation, with participants solving fewer than 20% of tasks under entrenched mental frameworks compared to neutral conditions.[143] Perceptual stereotyping exacerbates this, as preconceptions about problem elements—such as labeling components by default functions—hinder isolation of core issues, leading to incomplete formulations.[144]

Environmental factors impose external barriers that interact with perceptual limits, altering cognitive processing and solution efficacy. Time pressure diminishes performance in insight-oriented tasks by curtailing exploratory thinking; in remote associates tests, pressured participants generated 25-40% fewer valid solutions than those without deadlines, favoring heuristic shortcuts over thorough analysis.[145] Ambient noise levels modulate creativity nonlinearly: silence or excessive noise (above 85 dB) impairs divergent thinking, whereas moderate noise (approximately 70 dB) boosts abstract processing and idea generation by 15-20% in tasks like product ideation, as it promotes defocused attention without overwhelming sensory input.[146] Physical surroundings, including resource scarcity and cluttered spaces, compound these effects; experiments demonstrate that limited tools or distractions reduce problem-solving accuracy by increasing cognitive load, with error rates rising up to 30% in constrained setups versus optimized ones.[147] These constraints highlight how external conditions can rigidify perceptual biases, necessitating deliberate environmental adjustments for enhanced solvability.
Social and Ideological Obstacles

Social pressures, such as conformity, can impede effective problem solving by compelling individuals to align with group consensus despite evident errors. In Solomon Asch's 1951 experiments, participants faced a simple perceptual task of matching line lengths but conformed to the incorrect judgments of confederates in approximately one-third of trials, even when the correct answer was obvious, demonstrating how normative influence suppresses independent analysis and distorts judgment under social observation.[148] This conformity extends to collective settings, where fear of ostracism discourages dissent and fosters acceptance of suboptimal solutions.

Groupthink represents another social barrier, characterized by cohesive groups prioritizing harmony over critical evaluation, leading to flawed decision-making processes. Empirical reviews of Irving Janis's groupthink theory, spanning historical case analyses and laboratory studies, confirm its role in producing defective problem solving through symptoms such as the illusion of unanimity, self-censorship of doubts, and stereotyping of outsiders, as observed in events such as the Bay of Pigs invasion, where suppressed alternatives contributed to strategic failure.[149] Such dynamics reduce the exploration of viable options, amplifying errors in high-stakes group deliberations.

Ideological obstacles arise when entrenched beliefs constrain the consideration of evidence contradicting prior commitments, often manifesting as motivated reasoning that prioritizes worldview preservation over objective analysis. In academic fields like social psychology, political homogeneity—evidenced by surveys showing Democrat-to-Republican ratios exceeding 14:1 among faculty—fosters conformity to dominant progressive ideologies, biasing research questions, methodologies, and interpretations while marginalizing dissenting hypotheses.[150] This lack of viewpoint diversity empirically hampers creativity and discovery, as diverse perspectives enhance problem-solving rigor by challenging assumptions and mitigating the confirmation biases inherent to ideological echo chambers.[151][152]
Strategies for Mitigation

Strategies to mitigate individual cognitive barriers, such as confirmation bias and functional fixedness, emphasize awareness and structured techniques. Actively seeking disconfirming evidence counters confirmation bias by prompting individuals to evaluate alternative hypotheses rather than selectively interpreting data to support preconceptions.[153] Critical thinking training, including mindfulness practices, enhances metacognition, enabling recognition of biased reasoning patterns during problem formulation and evaluation.[154] For functional fixedness, reframing problems through "beyond-frame search"—explicitly considering uses of objects or concepts outside their conventional roles—increases solution rates in constrained tasks, as demonstrated in experimental studies where participants generated novel applications after prompted divergence.[155]

Perceptual and environmental constraints can be addressed by optimizing external cues and iterative testing. Simplifying problem representations, such as breaking complex tasks into modular components, reduces fixation on initial framings and facilitates alternative pathways.[134] Environmental adjustments, like minimizing distractions through dedicated workspaces or timed reflection periods, preserve cognitive resources for insight generation, with evidence from productivity studies showing improved focus and error reduction.[156] Checklists and algorithmic protocols enforce systematic review, overriding heuristic shortcuts in high-stakes domains like engineering and medicine.[157]

Social and ideological obstacles require mechanisms that introduce viewpoint diversity and empirical scrutiny. Forming heterogeneous teams mitigates groupthink by incorporating dissenting opinions, as randomized group compositions in decision experiments yield more robust solutions than homogeneous ones.[158] Assigning roles such as devil's advocate systematically challenges ideological assumptions, fostering causal analysis over consensus-driven narratives.[159] Institutional practices, such as pre-registration of hypotheses in research to prevent selective reporting, counteract ideological filtering of evidence, with meta-analyses confirming reduced bias in outcomes.[160]

- Training interventions: Longitudinal programs in debiasing, delivered via workshops or simulations, yield measurable improvements in bias detection, with participants showing 20-30% better performance on bias-laden puzzles post-training.[161]
- Technological aids: Software tools for randomization and blinding in analysis pipelines automate safeguards against confirmation-seeking, as applied in clinical trials to enhance validity.[160]
- Feedback loops: Regular debriefs incorporating objective metrics counteract perceptual blind spots, with organizational data indicating faster problem resolution in feedback-enabled teams.[162]
Complex Problem Characteristics
Defining Complexity and Wicked Problems
Complex problems in problem solving are distinguished by their inherent difficulty of prediction and management, owing to multiple interdependent elements, non-linear dynamics, and emergent behaviors that arise from interactions rather than from individual components.[164] Unlike complicated problems, which can be decomposed into predictable, linear sequences amenable to expert analysis and replication—such as engineering a bridge—complex problems feature uncertainty, ambiguity, and feedback loops that amplify small changes into disproportionate outcomes, as seen in ecological systems or economic markets.[165] Empirical studies in systems science quantify this through metrics such as interconnectedness (the number of variables and linkages) and polytely (conflicting multiple goals), where solutions require adaptive strategies rather than optimization algorithms.[70]

Wicked problems represent an extreme form of complexity, particularly in social, policy, and planning domains, where problems resist definitive resolution through conventional methods. Coined by Horst Rittel and Melvin Webber in their 1973 paper "Dilemmas in a General Theory of Planning," the term contrasts "wicked" issues with "tame" scientific puzzles, emphasizing that public policy challenges like urban poverty or environmental degradation defy clear boundaries and exhaustive analysis.[166] Rittel and Webber outlined ten defining properties:

- there is no definitive formulation of a wicked problem, as understanding evolves with inquiry;
- there is no stopping rule, and hence no criterion for completion;
- solutions are not true or false but better or worse, judged subjectively;
- there is no immediate or ultimate test of a solution, with effects unfolding over time;
- every solution is a "one-shot operation" that alters the problem: because trial and error carries irreversible consequences, every attempt counts significantly;
- there is no enumerable, exhaustively describable set of potential solutions;
- every wicked problem is essentially unique, with no class of similar problems for generalization;
- every wicked problem can be considered a symptom of another problem;
- the choice of explanation for the discrepancy determines the nature of the resolution, which is reachable only via argumentative planning, not formulas;
- the planner has no right to be wrong, imposing ethical stakes absent in tame domains.

These characteristics highlight causal realism in wicked problems: interventions create path-dependent trajectories influenced by stakeholder values and incomplete information, often exacerbating issues through unintended feedbacks, as evidenced in case studies of policy failures such as 20th-century urban renewal projects that displaced communities without resolving root inequities.[166]

While complexity theory provides tools such as agent-based modeling to simulate interactions—demonstrating, for instance, how traffic congestion emerges from individual driver behaviors rather than from centralized flaws—wicked problems demand iterative, participatory approaches over top-down fixes, acknowledging that full solvability is illusory in open systems.[167] This distinction informs problem-solving efficacy: tame problems yield to algorithmic precision, but complex and wicked ones necessitate humility about limits, prioritizing robust heuristics over illusory certainty.[70]
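The traffic example can be made concrete with a minimal agent-based model in the style of the Nagel-Schreckenberg cellular automaton, a standard textbook illustration offered here as a sketch with arbitrary parameters rather than a model from the cited sources. Congestion emerges from local acceleration and random-braking rules alone, with no centralized cause.

```python
import random

def traffic_sim(road_len=100, n_cars=30, v_max=5,
                p_brake=0.3, steps=100, seed=1):
    """Nagel-Schreckenberg traffic cellular automaton on a ring road.

    Each car accelerates toward v_max, slows to avoid the car ahead,
    and brakes at random with probability p_brake. Jams emerge from
    these purely local rules -- an example of emergent complexity.
    """
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(road_len), n_cars))  # cyclic order
    vel = [0] * n_cars
    for _ in range(steps):
        for i in range(n_cars):  # parallel update: gaps use old positions
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % road_len
            vel[i] = min(vel[i] + 1, v_max, gap)        # accelerate safely
            if vel[i] > 0 and rng.random() < p_brake:   # random braking
                vel[i] -= 1
        # Cars never pass, so the cyclic ordering of the list is stable.
        pos = [(p + v) % road_len for p, v in zip(pos, vel)]
    return sum(vel) / n_cars

# Dense traffic: mean speed collapses well below the free-flow maximum.
print(f"mean speed: {traffic_sim():.2f} (v_max = 5)")
```

Lowering `n_cars` restores near-free-flow speeds, illustrating the kind of non-linear, density-dependent behavior that distinguishes complex systems from merely complicated ones.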
Domain-Specific vs. General Solvers Debate

The debate over domain-specific versus general solvers examines whether complex problem solving predominantly requires tailored expertise confined to a particular field or leverages broadly applicable cognitive mechanisms. Domain-specific proponents, drawing on expertise research, contend that mastery arises from domain-restricted knowledge structures, such as pattern recognition and procedural routines honed through deliberate practice, which enable efficient handling of field-specific complexities.[168] In fields like chess or medicine, experts demonstrate superior performance via automated heuristics and vast repositories of domain-tuned facts, often independent of baseline cognitive variance once thresholds are met.[169] Conversely, advocates for general solvers emphasize fluid intelligence factors—encompassing inductive reasoning, working memory, and abstract transfer—that facilitate adaptation to novel or ill-structured problems transcending silos.[170]

Empirical investigations reveal that domain-general abilities often underpin the acquisition and application of expertise, particularly in the dynamic or interdisciplinary contexts characteristic of complex problems. A 2016 meta-analysis of 2,313 chess players found cognitive ability correlating with skill level at r = 0.35, suggesting that general intelligence constrains peak performance even among practitioners with thousands of hours of domain-specific training. Similarly, a 2023 study of primary school children solving science problems reported that domain-general executive functions and reasoning predicted outcomes more robustly than specific factual recall, with effect sizes indicating minimal unique variance from domain knowledge alone.[171] These findings challenge strict domain-specificity by showing limited far transfer from specialized practice to unfamiliar variants, as general capacities govern problem representation and hypothesis generation.[172]

For wicked problems—those with interdependent variables, incomplete information, and evolving stakes—the tension intensifies, as domain-specific silos may foster myopic framing while general solvers enable cross-domain synthesis. Longitudinal data on professional performance affirm that general cognitive ability retains predictive validity (β ≈ 0.5-0.6) for job-specific proficiency across experience levels, implying that domain expertise amplifies but does not supplant foundational reasoning.[173] Critics of pure generalism note empirical ceilings, such as novices' inability to operationalize problems without baseline domain cues, yet syntheses favor hybrid models in which general faculties scaffold specialized accrual. This interplay suggests that while domain-specific tools optimize routine efficacy, general solvers better navigate the uncertainty of complex, multifaceted challenges, with ongoing research quantifying their interaction via cognitive modeling.[174]
Empirical Evidence on Solvability Limits

In computability theory, the halting problem—determining whether a given program will terminate on a specific input—has been proven undecidable, meaning no algorithm can solve it for all cases. This theoretical limit has empirical implications in software verification and debugging, where automated tools achieve high but incomplete coverage; for instance, static analysis detects only a fraction of potential infinite loops, with studies reporting that up to 20-30% of software defects stem from undecidable behaviors such as non-termination in large codebases. Similarly, Rice's theorem generalizes this to any non-trivial property of program semantics, observed empirically in formal-methods failures such as the inability to prove liveness properties universally across systems without human intervention or approximation.[175]

Beyond undecidability, computational complexity theory reveals intractability for NP-complete problems, where exact solutions scale exponentially with input size and are believed (assuming P ≠ NP) to be unsolvable in polynomial time on classical computers. Empirical hardness studies of satisfiability (SAT) problems, a canonical NP-complete case, demonstrate phase transitions: easy instances solve quickly, but those near the critical constraint density (around 4.2 clauses per variable for random 3-SAT) exhibit exponential runtime explosions, with solvers timing out on benchmarks involving thousands of variables despite decades of algorithmic improvements. Real-world applications underscore this; the traveling salesman problem (TSP), which is NP-hard, admits exact solutions only at rapidly growing computational cost as instances scale, so logistics firms handling millions of routes rely on heuristics yielding results within 1-5% of optimal, since exhaustive search exceeds available computational resources even on supercomputers. Protein structure prediction, another intractable challenge, resisted exact computation until approximation breakthroughs, but fundamental limits persist for dynamic folding pathways due to combinatorial explosion in a conformational space exceeding 10^300 possibilities.[175][176]

In policy and social domains, wicked problems exhibit solvability limits through persistent recurrence and resistance to definitive resolution, as evidenced by longitudinal analyses of interventions. For example, urban poverty alleviation efforts, such as U.S. welfare reforms since the 1960s, have shown temporary reductions followed by rebounds, with meta-analyses indicating no sustained eradication due to interdependent factors like family structure, incentives, and cultural norms that defy linear causal fixes. Climate policy exemplifies this: despite trillions invested globally since the 1992 Rio Summit, emissions trajectories remain upward in key sectors, with econometric models revealing that regulatory approaches alter behaviors but trigger adaptive countermeasures (e.g., leakage to unregulated regions), supporting claims of inherent unsolvability absent paradigm shifts. Appeals to empirical evidence in such contexts often fail to converge on solutions, as stakeholder conflicts redefine problem boundaries iteratively, per analyses of over 40 years of wicked-problem literature.[177][178]

These limits extend to physical systems via chaos and quantum uncertainty, where empirical forecasting fails beyond short horizons.
Weather prediction models, grounded in the Navier-Stokes equations, achieve skill only up to roughly 7-10 days, as demonstrated by European Centre for Medium-Range Weather Forecasts data showing error-doubling times of 2-3 days due to sensitivity to initial conditions; beyond this horizon, probabilistic ensembles replace deterministic solvability. In quantum mechanics, Heisenberg's uncertainty principle imposes irreducible measurement limits, empirically verified in electron diffraction experiments since 1927, precluding exact simultaneous knowledge of position and momentum and thus full predictability for multi-particle systems. Collectively, these cases illustrate that while approximations mitigate practical impacts, fundamental solvability barriers—rooted in logical, computational, or causal incompleteness—persist across domains, constraining problem-solving efficacy to bounded regimes.[179]
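The SAT phase transition discussed above can be observed directly with a naive solver. The sketch below, with parameters chosen purely for illustration, generates random 3-SAT instances at different clause-to-variable ratios and times an exhaustive search, which slows markedly near the critical density; production solvers use far better algorithms, but the worst-case blowup remains.

```python
import itertools, random, time

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT instance: each clause has 3 distinct variables
    with random signs; positive literal v means variable v is true."""
    return [[v * rng.choice((1, -1))
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def brute_force_sat(n_vars, clauses):
    """Exhaustive search over all 2^n assignments (worst case)."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
n = 16  # small enough that 2^16 assignments finish in seconds
for ratio in (2.0, 4.3, 6.0):  # under-, near-, and over-constrained
    clauses = random_3sat(n, int(ratio * n), rng)
    t0 = time.perf_counter()
    sat = brute_force_sat(n, clauses)
    dt = time.perf_counter() - t0
    print(f"ratio {ratio}: {'SAT' if sat else 'UNSAT'} in {dt:.3f}s")
```

Under-constrained instances terminate early once a satisfying assignment appears, while instances at or beyond the threshold force the search through most or all of the exponential assignment space, mirroring the empirical hardness pattern described above.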
Individual and Collective Dimensions

Strengths of Individual Problem Solving
Individual problem solving permits autonomous reasoning free from interpersonal influences, enabling solvers to pursue unconventional paths without the consensus requirements that often stifle innovation in groups. This independence mitigates the risks of conformity, as demonstrated in studies where group members suppress dissenting views to maintain harmony, leading to suboptimal outcomes in historical cases such as the Bay of Pigs invasion analyzed by Irving Janis in 1972.[180] In contrast, solitary thinkers retain full agency over idea evaluation, fostering originality unhindered by social cues.[181]

A primary strength lies in avoiding social loafing, whereby group participants exert less effort because responsibility is diffused. Empirical experiments, such as those by Bibb Latané and colleagues in 1979, showed individuals pulling harder on ropes alone than in teams, with effort reductions of up to 50% in larger groups; similar dynamics apply to cognitive tasks, preserving maximal personal investment.[182] This ensures accountability aligns directly with performance, unlike collectives where free-riding dilutes contributions.[183]

Solitary approaches facilitate rapid iteration and deep concentration, unencumbered by the coordination delays that extend group processes—often doubling decision times, per organizational behavior research.[184] Incubation periods, which allow subconscious processing, prove more effective individually, as fixation on shared early ideas hampers group creativity; psychological studies confirm that solitary breaks enhance insight problem-solving rates by 20-30% over continuous effort.[185] For intellective tasks relying on specialized knowledge, individuals leverage undiluted expertise, outperforming averages in nominal-group comparisons where aggregated solo solutions exceed interactive deliberations.[186]

In domains demanding divergent thinking, such as initial ideation, individuals generate more unique solutions absent production blocking—where group members wait to speak—and evaluation apprehension; a 1987 meta-analysis by Michael Diehl and Wolfgang Stroebe found that individual brainstorming yields 20-40% higher idea quantities than group sessions.[187] Thus, while collectives aggregate diverse inputs, individual solving excels in unbiased depth and efficiency for both novel and routine challenges.
Collaborative Approaches and Their Drawbacks

Collaborative problem solving involves methods such as brainstorming sessions, team deliberations, and structured group techniques in which multiple individuals contribute ideas and refine solutions collectively.[188] These approaches leverage diverse perspectives to address complex issues, as seen in organizational settings and scientific teams.[189]

One prominent method, brainstorming, originated with Alex Osborn in the 1940s and encourages free idea generation without immediate criticism.[188] However, empirical research demonstrates its limitations; groups engaging in verbal brainstorming produce fewer and less original ideas than the same number of individuals working independently, a phenomenon termed nominal group underperformance.[190] A key drawback is production blocking, where participants must wait their turn to speak, leading to forgotten ideas and disrupted cognitive flow.[191] Studies confirm that this blocking interferes with idea organization, particularly with longer delays between contributions, reducing both the creativity and the quantity of outputs.[192] For instance, a 1987 analysis identified production blocking as the primary obstacle to group brainstorming efficacy relative to solitary ideation.

Groupthink, conceptualized by Irving Janis in 1972, represents another critical flaw, wherein cohesive groups prioritize consensus over critical evaluation, suppressing dissent and overlooking alternatives.[194] This dynamic has been linked to flawed decisions in historical cases, such as policy fiascoes, through symptoms like the illusion of invulnerability and self-censorship.[195] Janis's framework highlights how structural factors, including group insulation and directive leadership, exacerbate these risks in problem-solving contexts.[196]

Social loafing further undermines collaboration, as individuals exert less effort in groups, diffusing responsibility and reducing personal accountability.[182] Experimental evidence from tasks like rope pulling and idea generation shows participants performing at lower levels when contributions are not identifiable, a pattern that persists across team-based problem solving.[197] Comparative studies reveal that interacting groups often match only the performance of their best individual member, not surpassing it, owing to these interpersonal and process inefficiencies.[198] While groups may outperform average individuals on certain tasks, the prevalence of these drawbacks—evident in reviews of research spanning 1920 to 1957 and beyond—indicates that unmitigated collaboration can hinder rather than enhance problem-solving outcomes.[199] Techniques like nominal grouping or electronic brainstorming aim to address these issues but underscore the inherent challenges of group dynamics.[200]
Hybrid Models and Real-World Efficacy

Hybrid models in problem solving integrate phases of independent individual work with structured group interaction, aiming to combine the depth of solitary cognition with the breadth of collective input while curbing drawbacks such as groupthink and social loafing. These approaches, exemplified by the nominal group technique (NGT), involve silent individual idea generation followed by round-robin sharing, discussion, and voting, which empirical studies indicate produces more prioritized and feasible solutions than unstructured brainstorming.[201] A 1984 analysis highlighted NGT's superiority in eliciting diverse inputs without domination by vocal members, leading to consensus on high-quality decisions in applied settings such as community planning and business strategy formulation.[201]

Recent research on hybrid brainstorming reinforces this efficacy, demonstrating that alternating individual and group ideation phases generates superior idea quantity and quality compared to purely collaborative or purely solitary methods. For instance, a 2024 study found that hybrid procedures, irrespective of whether individual work precedes or follows group phases, outperform traditional group brainstorming by reducing production blocking and enhancing idea elaboration through scripted prompts and group awareness tools.[202] Another investigation in 2025 confirmed that initiating hybrid collaborations with individual ideation boosts subsequent interactive quality and overall solution novelty, attributing the gains to minimized early conformity pressures.[203]

In real-world applications, hybrid models have proven effective across domains, including education and organizational innovation. A 2018 randomized trial in medical education compared pure problem-based learning (PBL), hybrid PBL (integrating self-directed individual study with small-group tutorials), and conventional lecturing, revealing that hybrid PBL significantly enhanced higher-order problem-solving skills, with students achieving 12-15% higher performance on clinical reasoning assessments.[204] In professional contexts, such as software development under agile frameworks, hybrid structures—featuring individual sprint tasks interspersed with daily stand-ups and retrospectives—correlate with reduced defect rates and faster issue resolution, as documented in industry analyses where teams reported 20-30% improvements in delivery predictability over waterfall methods.[205] These findings underscore hybrid models' robustness in complex, dynamic environments, though success hinges on facilitation to prevent dilution of individual contributions during integration phases.[206]
Advances in AI and Computational Problem Solving

Historical AI Milestones
The Dartmouth Summer Research Project on Artificial Intelligence, held from June to August 1956 at Dartmouth College, marked the formal inception of AI as a field of study, with participants including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon proposing the development of machines that could simulate human intelligence, including "solving the kinds of problems now reserved for humans."[207] The conference proposal emphasized programs for "automatic computer" problem solving in areas like language translation and abstract concept formation, setting the agenda for symbolic AI approaches that prioritized logical reasoning over statistical methods.[208] The event catalyzed early funding and research, though initial optimism about rapid progress in general problem solving proved overstated, given computational limitations and the complexity of non-numerical tasks.[209]

In 1957, Allen Newell, J. C. Shaw, and Herbert A. Simon at RAND Corporation introduced the General Problem Solver (GPS), one of the first AI programs explicitly designed for heuristic problem solving across diverse domains, applying means-ends analysis to reduce differences between current and goal states.[68] GPS successfully tackled well-defined puzzles such as the Tower of Hanoi and theorem-proving tasks, demonstrating that computers could replicate human-like search strategies without domain-specific coding, though it struggled with problems requiring deep contextual knowledge or ill-structured goals.[209] Building on their earlier Logic Theorist program of 1956, which automated mathematical proofs, GPS exemplified the "physical symbol system" hypothesis that intelligence arises from manipulating symbols according to rules, influencing subsequent cognitive-science models of human reasoning.[210] Despite its claims to generality, GPS's performance was confined to toy problems, highlighting early AI's inability to scale to real-world complexity without exhaustive rule sets.[211]
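GPS's central idea, means-ends analysis, can be sketched compactly. The toy Python version below uses hypothetical facts and operators (it is not Newell, Shaw, and Simon's original encoding): it repeatedly selects an operator whose effects reduce the difference between the current state and the goal, first recursively achieving that operator's preconditions.

```python
def apply_plan(state, plan, operators):
    """Apply a sequence of operator names to a state (a set of facts)."""
    table = {name: (pre, add, delete) for name, pre, add, delete in operators}
    for name in plan:
        pre, add, delete = table[name]
        state = (state | add) - delete
    return state

def means_ends(state, goal, operators, depth=8):
    """Toy means-ends analysis in the spirit of GPS.

    Each operator is (name, preconditions, additions, deletions).
    Returns a plan (list of operator names) turning `state` into a
    superset of `goal`, or None within the depth bound.
    """
    if goal <= state:
        return []                      # no remaining difference
    if depth == 0:
        return None                    # cut off runaway recursion
    difference = goal - state
    for name, pre, add, delete in operators:
        if add & difference:           # operator reduces the difference
            sub = means_ends(state, set(pre), operators, depth - 1)
            if sub is None:
                continue
            mid = apply_plan(state, sub, operators)
            if not set(pre) <= mid:    # preconditions were clobbered
                continue
            rest = means_ends((mid | add) - delete, goal, operators, depth - 1)
            if rest is not None:
                return sub + [name] + rest
    return None

# Hypothetical operators: (name, preconditions, additions, deletions)
ops = [
    ("walk-to-door", set(), {"at-door"}, {"at-desk"}),
    ("open-door", {"at-door"}, {"door-open"}, set()),
    ("walk-through", {"door-open", "at-door"}, {"outside"}, {"at-door"}),
]
print(means_ends({"at-desk"}, {"outside"}, ops))
# -> ['walk-to-door', 'open-door', 'walk-through']
```

The depth bound stands in for GPS's more elaborate control strategy; as the surrounding text notes, such difference-reduction search works on toy domains but breaks down without extensive, hand-built operator knowledge.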
The 1960s and 1970s saw the rise of expert systems, narrow AI applications encoding domain-specific knowledge into rule-based inference engines to solve practical problems beyond puzzles. In 1965, Edward Feigenbaum and Joshua Lederberg developed DENDRAL at Stanford, the first expert system, which analyzed mass spectrometry data to infer molecular structures by generating and testing hypotheses against empirical constraints.[212] The approach proved effective for chemistry diagnostics, achieving accuracy comparable to human experts through backward-chaining search and knowledge representation via production rules.[213] Subsequent systems like MYCIN (1976), also from Stanford, diagnosed bacterial infections and recommended antibiotics with 69% accuracy in clinical evaluations, outperforming some physicians by systematically weighing symptoms, lab results, and therapeutic trade-offs.[209] Expert systems proliferated in the 1980s, with commercial successes in fields like finance and engineering, but their brittleness—failing outside encoded rules—and knowledge-acquisition bottlenecks contributed to the second AI winter by the late 1980s, underscoring that rule-based methods scaled poorly to dynamic or uncertain environments.[210]

A landmark in specialized search-based problem solving occurred in 1997, when IBM's Deep Blue supercomputer defeated world chess champion Garry Kasparov 3.5–2.5 in a six-game rematch, evaluating up to 200 million positions per second via minimax search with alpha-beta pruning and custom hardware accelerators.[214] Deep Blue's success relied on vast opening books, endgame databases, and evaluation functions tuned by grandmasters, rather than learning, demonstrating that brute-force computation combined with heuristics could surpass human intuition in constrained, perfect-information games.[215] While not general intelligence, this milestone validated AI's efficacy for combinatorial optimization problems, influencing later advances in game theory and planning algorithms, though critics noted that chess's narrow scope limited broader transferability to open-ended solving.[209]
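Deep Blue's core search procedure, minimax with alpha-beta pruning, can be illustrated generically. The sketch below operates on an abstract game tree through hypothetical `children` and `evaluate` callbacks; it shows only the pruning idea, not Deep Blue's hardware-tuned implementation.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax search with alpha-beta pruning over an abstract game tree.

    `children(node)` yields successor positions; `evaluate(node)` scores
    a leaf from the maximizing player's perspective. Pruning discards
    branches that provably cannot affect the final minimax value.
    """
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: opponent avoids this branch
                break
        return value
    value = float("inf")
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:       # alpha cutoff: we would never come here
            break
    return value

# Toy tree: inner nodes are lists of children, leaves are integer scores.
tree = [[3, 5], [2, [9, 1]], [0, -4]]
best = alphabeta(tree, depth=4, alpha=float("-inf"), beta=float("inf"),
                 maximizing=True,
                 children=lambda n: n if isinstance(n, list) else [],
                 evaluate=lambda n: n if isinstance(n, int) else 0)
print(best)  # minimax value of the toy tree (3)
```

Deep Blue paired this pruning with grandmaster-tuned evaluation functions and opening and endgame databases; the pruning itself is what makes searching hundreds of millions of positions per second pay off, since whole subtrees are discarded without examination.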
Recent Breakthroughs (2010s-2025)

In 2016, DeepMind's AlphaGo program defeated Go world champion Lee Sedol in a five-game match held in Seoul, winning 4-1 and demonstrating superhuman performance in a game requiring long-term strategic planning amid an estimated 10^170 possible positions.[216] This breakthrough combined deep neural networks with Monte Carlo tree search, marking a pivotal advance in AI's ability to handle combinatorial explosion in perfect-information games with vast branching factors.[217] Building on this, AlphaGo Zero, released in 2017, mastered Go through pure self-play reinforcement learning without any human game data, surpassing the version that defeated Lee Sedol after roughly three days of training; its successor AlphaZero generalized the approach to chess and shogi.[218] In 2020, DeepMind's AlphaFold 2 system effectively resolved the long-standing protein structure prediction problem, achieving a median score of 92.4 GDT (Global Distance Test) on CASP14 targets and enabling rapid modeling of biological structures that previously required years of laboratory work.[219]

The 2020s saw the integration of large language models (LLMs) with structured reasoning techniques. In December 2023, DeepMind's FunSearch method leveraged LLMs for evolutionary program synthesis, yielding new solutions to the cap set problem in combinatorial mathematics that exceeded prior human-discovered bounds, including constructions of size 512 in higher dimensions.[220] OpenAI's o1 model, previewed in September 2024, incorporated extended chain-of-thought reasoning during inference, improving performance on complex tasks such as PhD-level science questions by factors of 2-10 over its predecessors through simulated deliberation steps.[221]

Further progress in formal mathematical reasoning emerged in 2024 with DeepMind's AlphaProof, which solved four of six International Mathematical Olympiad (IMO) problems at silver-medal level using reinforcement learning combined with formal proof verification in Lean, tackling competition problems requiring novel insights.[74] AlphaFold 3, announced in May 2024, extended predictions to biomolecular complexes including DNA, RNA, and ligands, with 50% improved accuracy over AlphaFold 2 on protein-ligand interactions.[222] By September 2025, Google DeepMind reported that its Gemini 2.5 model addressed real-world engineering optimization problems that had eluded human programmers, such as efficient circuit design under constraints, via enhanced planning and simulation capabilities.[223] These developments highlight AI's shift toward scalable, generalizable solvers for domains ranging from games to science, though limitations persist in extrapolating beyond training distributions.[224]
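The Monte Carlo tree search at the heart of AlphaGo can be illustrated in miniature. The following UCT sketch plays the toy game of Nim with uniformly random playouts; unlike AlphaGo, there is no neural network guiding selection or evaluation, so this is only an assumption-laden illustration of the selection, expansion, simulation, and backpropagation loop.

```python
import math, random

def moves(stones):
    """Legal moves in Nim: remove 1, 2, or 3 stones."""
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player   # `player` moves next
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

def uct_search(stones, iters=2000, c=1.4, seed=0):
    """Monte Carlo tree search (UCT) for Nim: last stone taken wins."""
    rng = random.Random(seed)
    root = Node(stones, player=0)
    for _ in range(iters):
        node = root
        # 1. Selection: descend by UCB1 while the node is fully expanded.
        while node.children and len(node.children) == len(moves(node.stones)):
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried move, if any remain.
        tried = {ch.move for ch in node.children}
        untried = [m for m in moves(node.stones) if m not in tried]
        if untried:
            m = rng.choice(untried)
            child = Node(node.stones - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from this node to the end.
        stones_left, player = node.stones, node.player
        while stones_left > 0:
            stones_left -= rng.choice(moves(stones_left))
            player = 1 - player
        winner = 1 - player             # whoever just took the last stone
        # 4. Backpropagation: credit wins to the player who moved in.
        while node:
            node.visits += 1
            if winner != node.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

# With 10 stones, optimal play removes 2, leaving a multiple of 4.
print("best opening move:", uct_search(10))
```

AlphaGo's contribution was to replace the uniform random playouts and untried-move choices here with policy- and value-network estimates, which is what let the same search skeleton scale to Go's enormous branching factor.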
Human-AI Synergies and Ethical Considerations

Human-AI synergies in problem solving leverage complementary strengths: AI systems excel at rapid data processing, pattern recognition, and scalable computation, while humans contribute contextual understanding, ethical judgment, and creative intuition.[225] Empirical studies indicate that such collaborations can enhance performance on tasks requiring both analytical precision and innovative synthesis, such as content creation and certain decision-making scenarios, where hybrid teams outperform solo human or AI efforts.[226] For instance, in medical diagnostics, AI algorithms combined with radiologist oversight have improved cancer-detection accuracy beyond either party's individual capability, as AI identifies subtle anomalies in imaging data that humans might overlook, supplemented by human evaluation of clinical context.[227]

However, research reveals limitations and task-dependent outcomes, with human-AI teams sometimes underperforming the better of human or AI alone, particularly in structured analytical tasks such as fake-review detection, where AI achieved 73% accuracy compared to 55% for humans but hybrid setups did not consistently exceed the AI baseline.[228] A 2024 meta-analysis across diverse domains found that while synergies aid creative problem solving, such as text and image generation, they falter in high-stakes decisions without robust integration, often due to coordination challenges, reduced trust, and mismatched cognitive processes.[229] In scientific problem-solving contexts, human-human collaboration has demonstrated larger improvements over baselines than human-AI pairings, highlighting the value of shared human intuition in navigating ambiguous "wicked" problems.[230]

Ethical considerations in human-AI synergies emphasize accountability: when AI decisions in critical applications such as healthcare or policy go wrong, questions of responsibility arise, necessitating "human-in-the-loop" mechanisms to ensure oversight and transparency.[231] AI systems can perpetuate or amplify biases embedded in training data, potentially leading to flawed problem-solving outcomes unless humans actively intervene, a risk compounded by over-reliance that may erode human analytical skills over time.[232] Frameworks for ethical deployment stress maintaining human autonomy, fostering trust through explainable AI, and addressing equity issues, such as unequal access to advanced tools, to prevent synergies from exacerbating societal divides rather than resolving complex problems.[233][234]

References
- https://math.answers.com/math-and-arithmetic/Differences_between_routine_and_non_routine_mathematics_problem_solving
- https://sebokwiki.org/wiki/Complexity
- https://thedecisionlab.com/reference-guide/management/brainstorming