Bloom (test)
from Wikipedia

Bloom is a test used to measure the strength of a gel, most commonly gelatin. The test was originally developed and patented in 1925 by Oscar T. Bloom.[1] The test determines the weight in grams needed by a specified plunger (normally with a diameter of 0.5 inch) to depress the surface of the gel by 4 mm without breaking it, at a specified temperature.[2] The number of grams is called the Bloom value, and most gelatins fall between 30 and 300 g Bloom. The higher the Bloom value, the higher the melting and gelling points of the gel, and the shorter its gelling time.[2] The method is most often used on soft gelatin capsules ("softgels"). To perform the Bloom test on gelatin, a lab keeps a 6.67% gelatin solution at 10 °C for 17–18 hours before testing it.

Various gelatins are categorized as "low Bloom", "medium Bloom", or "high Bloom", but there are no universally defined values for these subranges. Gelatin is a biopolymer composed of polypeptide chains of varying length. The longer the chains, the higher the Bloom number:[3]

Gelatin classes

Category     | Bloom number (Bloom strength) | Average molecular mass (g/mol) | Examples
Low Bloom    | 30–150[4]                     | 20,000–25,000                  | Beef hide low Bloom gelatin (USP-NF)[5]
Medium Bloom | 150–225                       | 40,000–50,000                  | Gelatin type B[6]
High Bloom   | 225–325                       | 50,000–100,000                 | Gelatin type A[6]
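
Since the class boundaries in this table overlap at 150 and 225 g, any programmatic lookup has to pick a convention. The following minimal Python sketch (illustrative only, with boundary values assigned to the lower class by assumption) maps a Bloom value onto the table's categories:

```python
def gelatin_class(bloom_value_g: float) -> str:
    """Map a Bloom value in grams onto the class ranges tabulated above.

    Boundary values (150 g, 225 g) are assigned to the lower class here;
    that tie-break is an assumption, since the tabulated ranges overlap.
    """
    if not 30 <= bloom_value_g <= 325:
        raise ValueError(f"{bloom_value_g} g is outside the tabulated 30-325 g range")
    if bloom_value_g <= 150:
        return "Low Bloom"
    if bloom_value_g <= 225:
        return "Medium Bloom"
    return "High Bloom"

print(gelatin_class(90))   # -> Low Bloom
print(gelatin_class(260))  # -> High Bloom
```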

from Grokipedia
The Bloom test is a standardized procedure for measuring the gel strength, or firmness, of gelatin and similar gelling agents, expressed in Bloom units (grams of force). Developed in 1925 by American chemist Oscar T. Bloom, a researcher at the Swift & Company meat-packing firm in Chicago, the test uses a specialized device called a gelometer to assess gelatin quality for consistent production in food, pharmaceutical, and photographic applications. In the Bloom test, a 6.67% by weight gelatin solution in water is prepared, poured into a standard Bloom jar, and allowed to mature for 16 to 18 hours at a controlled temperature of 10°C (50°F) to form a gel. A cylindrical plunger, typically 1/2 inch (12.7 mm) in diameter, is then lowered onto the gel surface under controlled conditions, depressing it exactly 4 mm (5/32 inch); the maximum force required, measured in grams, directly corresponds to the Bloom value, with higher values indicating stronger gels. Bloom strengths are categorized as low (under 125 Bloom, for soft textures like whipped toppings), medium (125–225 Bloom, for yogurts and aspics), or high (225–300 Bloom, for firm products like marshmallows and capsules), ensuring consistency across batches in industries where gel strength affects texture, stability, and performance. Modern implementations often employ texture analyzers for precision, replacing manual gelometers while adhering to the original methodology standardized by the Gelatin Manufacturers Institute of America.
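
For quick reference, the protocol parameters in this description can be collected into named constants; the names below are hypothetical placeholders for illustration, not drawn from any standard or library:

```python
# Bloom test protocol parameters, per the description above
# (constant names are hypothetical placeholders).
GELATIN_CONCENTRATION_WT_PCT = 6.67   # % w/w gelatin in water
MATURATION_TIME_H = (16, 18)          # hours of maturation in the Bloom jar
MATURATION_TEMP_C = 10.0              # controlled temperature, 10 °C (50 °F)
PLUNGER_DIAMETER_MM = 12.7            # cylindrical plunger, 1/2 inch
DEPRESSION_DEPTH_MM = 4.0             # target depression, 5/32 inch

def bloom_value(peak_force_g: float) -> float:
    """By definition, the Bloom value is simply the peak force in grams
    needed to depress the matured gel by 4 mm."""
    return peak_force_g
```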

History and Development

Origins in Industrial Research

The Bloom test originated in the early 20th century amid the growth of the American meat-packing industry, which sought reliable methods to utilize animal byproducts like hides and bones for producing gelatin. In the 1920s, consistent gelatin quality was crucial for applications in food, pharmaceuticals, and early photographic films, where variations in gel strength could affect product stability and performance. This need drove research into standardized testing to ensure reproducibility across production batches. Oscar T. Bloom, an American chemist employed at Swift & Company in Chicago, addressed these challenges during his work on gelatin extraction and processing. Swift & Company, a leading meat-packing firm, relied on gelatin for various industrial uses, prompting Bloom to develop a precise measurement technique. His efforts reflected broader industrial trends toward standardization and quality control in manufacturing, similar to movements in other sectors post-World War I.

Invention and Standardization

The Bloom test was invented in 1925 by Oscar T. Bloom (1881–1965), who created a specialized instrument known as the gelometer to quantify gelatin firmness. This device applied a controlled force via a plunger to a prepared gelatin sample, measuring the resistance in grams (now termed Bloom units) to provide an objective metric for gel strength. Bloom's innovation, patented and implemented at Swift & Company, enabled consistent evaluation of gelatin derived from animal byproducts, revolutionizing quality control in the industry. By the mid-20th century, the method gained widespread adoption and was formalized through standards set by the Gelatin Manufacturers Institute of America (GMIA), established in 1940 to promote uniform testing protocols. The GMIA's official methods, including precise sample preparation (a 6.67% gelatin solution matured at 10°C) and measurement procedures, ensured comparability across manufacturers, with the test remaining the global standard for edible and technical gelatins as of 2025. Modern adaptations use digital texture analyzers while preserving the core 1925 methodology.

Original Taxonomy

Cognitive Domain Levels

The original Bloom's Taxonomy, published in 1956, delineates the cognitive domain into six hierarchical levels that represent progressively complex mental processes, from basic recall to advanced judgment, providing a framework for designing educational assessments that target specific intellectual skills. These levels emphasize the development of thinking abilities essential for learning and testing, with each building upon the previous to foster deeper understanding and application in test items.

Level 1: Knowledge involves the recall of specific facts, terms, basic concepts, or methods without necessarily understanding their meaning, serving as the foundational level for rote memorization in assessments. In test design, this level is commonly assessed through multiple-choice questions requiring the identification of definitions or dates, such as "List the three branches of government" or "State the date of a major historical event." Associated action verbs include define, list, recall, name, and identify, which guide the creation of straightforward recall-based items.

Level 2: Comprehension requires understanding the meaning of information, including interpreting, summarizing, or extrapolating from material, demonstrating grasp beyond mere repetition. Test items at this level might ask students to explain concepts in their own words, such as "Summarize the main idea of a passage" or "Interpret the significance of a poem's imagery," often using open-ended questions to gauge interpretive skills. Key verbs encompass explain, describe, interpret, summarize, and compare, facilitating assessments that verify conceptual understanding.

Level 3: Application entails using acquired knowledge or methods in new and concrete situations to solve problems or demonstrate skills, bridging theory to practice. Examples in testing include problem-solving tasks like "Apply the Pythagorean theorem to find the length of a ladder leaning against a wall" or "Use historical precedents to predict outcomes in a current event scenario," typically through scenario-based or calculation questions. Relevant verbs are apply, demonstrate, solve, use, and illustrate, which support the development of practical test formats.

Level 4: Analysis focuses on breaking down information into its component parts and understanding how they relate to form a whole, emphasizing differentiation and the relationships among elements. In assessments, this might involve items such as "Compare the arguments for and against a policy" or "Analyze the structure of an essay to identify its thesis and supporting evidence," often requiring diagramming or comparative essays. Action verbs like analyze, compare, differentiate, examine, and contrast are used to craft questions that probe relational insights.

Level 5: Synthesis involves combining elements from diverse sources to create a new, coherent structure or product, representing creative integration at a higher cognitive plane. Test examples include "Design an experiment to test a hypothesis on plant growth" or "Formulate a business plan integrating economic principles," assessed via projects or original compositions that evaluate novelty. Verbs such as synthesize, create, design, formulate, and develop direct the formulation of innovative assessment tasks.

Level 6: Evaluation demands making judgments about the value of ideas, methods, or materials based on internal evidence or external criteria, culminating in critical assessment. Relevant test items could be "Judge the validity of a study's conclusions" or "Critique the effectiveness of a policy using ethical standards," often through debates or evaluative essays. Associated verbs include evaluate, judge, appraise, critique, and justify, enabling tests that measure reasoned decision-making.
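
Because these verb lists are the usual hook for item writers, they lend themselves to a simple lookup table. The Python sketch below is a naive heuristic, not a validated classifier: it guesses a test item's level from verb matches, resolving verbs that appear at two levels (such as "compare") in favor of the higher level:

```python
# Action verbs for each level of the original taxonomy, as listed above.
BLOOM_VERBS = {
    "Knowledge":     {"define", "list", "recall", "name", "identify"},
    "Comprehension": {"explain", "describe", "interpret", "summarize", "compare"},
    "Application":   {"apply", "demonstrate", "solve", "use", "illustrate"},
    "Analysis":      {"analyze", "compare", "differentiate", "examine", "contrast"},
    "Synthesis":     {"synthesize", "create", "design", "formulate", "develop"},
    "Evaluation":    {"evaluate", "judge", "appraise", "critique", "justify"},
}

def guess_level(item: str) -> str | None:
    """Naively guess a test item's taxonomy level from its action verbs.

    Returns the highest matching level (dicts preserve insertion order,
    so the last match is the highest), or None if no listed verb appears.
    """
    words = {w.strip('.,;:!?"\'()') for w in item.lower().split()}
    matches = [level for level, verbs in BLOOM_VERBS.items() if verbs & words]
    return matches[-1] if matches else None

print(guess_level("Design an experiment to test a hypothesis"))  # -> Synthesis
print(guess_level("List the three branches of government"))      # -> Knowledge
```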

Structure and Hierarchical Framework

The original handbook presents a hierarchical framework for classifying educational objectives within the cognitive domain, structured as a cumulative scale that progresses from simpler to more complex intellectual behaviors. This hierarchy consists of six levels, where each higher level presupposes the acquisition and integration of skills from the preceding ones, though the progression is not strictly linear in application. For instance, achieving proficiency at higher levels, such as synthesis or evaluation, requires foundational competencies in knowledge and comprehension, emphasizing cumulative development rather than isolated steps. The handbook focused exclusively on the cognitive domain, delineating intellectual abilities and skills related to knowledge acquisition and manipulation, while briefly noting plans for parallel taxonomies in the affective and psychomotor domains. These additional domains, addressing attitudes, values, and physical skills respectively, were intended to form a comprehensive tripartite structure but were developed and published separately in subsequent works, leaving the cognitive framework as the initial and primary contribution. The taxonomy's primary purpose was to establish a common language among educators for articulating, classifying, and exchanging educational goals, thereby facilitating the design of curricula, assessments, and instruction that cover a balanced spectrum of cognitive demands. By providing standardized categories, it enabled educators to ensure that tests and learning objectives span the hierarchy, avoiding overemphasis on lower-level recall at the expense of higher-order thinking. Intended as a flexible framework rather than a rigid or exhaustive system, the handbook classified numerous educational objectives compiled by the authoring group, offering illustrative examples and subcategories to guide application without claiming to encompass every possible cognitive behavior. This non-exhaustive approach allowed for adaptation across diverse educational contexts, with the emphasis on utility in promoting clearer communication and systematic evaluation of learning outcomes.

Revised Taxonomy

Key Modifications

The 2001 revision of Bloom's Taxonomy was led by Lorin Anderson, a former student of Bloom, and David Krathwohl, one of the original contributors, and was published as A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives by Longman. This update addressed several limitations identified in the original framework, including critiques raised by Bloom himself during the 1990s regarding its hierarchical rigidity and emphasis on static knowledge over dynamic processes. A primary structural change involved shifting the terminology from nouns to action-oriented verbs to better reflect cognitive processes as active engagements rather than mere states, for instance, transforming "Knowledge" into "Remembering" to highlight retrieval as a behavioral action. Additionally, the revision reordered the highest levels of the cognitive domain by placing "Creating" (formerly "Synthesis") as the new highest level above "Evaluating" (formerly "Evaluation"), recognizing that while critical judgment often precedes novel production in learning sequences, creating represents the pinnacle of cognitive processes. The most significant organizational innovation was the introduction of a two-dimensional matrix framework, which intersects the cognitive process dimension (comprising six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating) with a knowledge dimension featuring four categories: Factual, Conceptual, Procedural, and Metacognitive. This expansion aimed to provide a more comprehensive tool for educators by accounting for both the type of knowledge involved and the mental operations applied to it, thereby enhancing its utility in assessment and instruction.

Updated Levels and Dimensions

The revised taxonomy introduces six cognitive process levels, shifting from nouns to action verbs to emphasize active processes. These levels form a hierarchical continuum from lower-order to higher-order thinking: (1) Remembering, which involves retrieving relevant knowledge from long-term memory, such as recognizing or recalling facts; (2) Understanding, which entails constructing meaning through interpreting, exemplifying, classifying, summarizing, inferring, comparing, or explaining; (3) Applying, which requires executing or implementing procedures in a given situation; (4) Analyzing, which breaks material into constituent parts and determines how they relate, including differentiating, organizing, or attributing; (5) Evaluating, which involves making judgments based on criteria and standards through checking or critiquing; and (6) Creating, the highest level, where one generates, plans, or produces new structures or patterns by recombining elements.

Complementing these cognitive processes is the knowledge dimension, a new addition that categorizes the types of knowledge learners engage with across four levels. Factual knowledge encompasses basic elements, such as terminology and specific details necessary to identify what is being communicated. Conceptual knowledge involves understanding interrelationships among basic elements within a larger structure, including knowledge of classifications, principles, generalizations, theories, models, or structures. Procedural knowledge covers how to do something, encompassing subject-specific skills, algorithms, techniques, methods, or criteria for using skills, tools, or formats. Finally, metacognitive knowledge refers to awareness of one's own cognition and the factors influencing it, including strategic knowledge about cognitive tasks, self-knowledge, and contextual awareness.

The revised taxonomy organizes these components into a two-dimensional matrix, a 6x4 grid that intersects the cognitive process levels (columns) with the knowledge dimensions (rows) to classify educational objectives and assessment tasks more precisely. This structure allows for nuanced descriptions of learning outcomes, such as the cell for Analyzing (level 4) and Procedural knowledge (row 3), which might involve analyzing algorithms by differentiating steps and organizing error patterns. The matrix facilitates alignment between instructional goals and evaluation methods, enabling educators to map complex objectives across both dimensions.

In assessment contexts, the matrix supports targeted test design by specifying intersections relevant to different question types. For instance, Remembering Factual Knowledge could be assessed through multiple-choice items requiring recall of historical dates, such as identifying the year of a major event. At the higher end, Creating Conceptual Knowledge might involve open-ended tasks where students design a scientific model integrating principles like natural selection to explain evolutionary patterns. These examples illustrate how the framework guides the creation of assessments that progressively challenge learners from basic retrieval to innovative synthesis.
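
One way to make the 6x4 grid concrete is a dictionary keyed by (process, knowledge) cells. In the minimal Python sketch below, the two placements are the illustrative examples from this section, not an official mapping:

```python
from itertools import product

COGNITIVE_PROCESSES = ("Remembering", "Understanding", "Applying",
                       "Analyzing", "Evaluating", "Creating")
KNOWLEDGE_DIMENSIONS = ("Factual", "Conceptual", "Procedural", "Metacognitive")

# Build the empty 6x4 taxonomy table: one list of objectives per cell.
taxonomy_table = {cell: [] for cell in product(COGNITIVE_PROCESSES,
                                               KNOWLEDGE_DIMENSIONS)}

# Classifying an objective means placing it in exactly one cell.
taxonomy_table[("Remembering", "Factual")].append(
    "Identify the year of a major historical event")
taxonomy_table[("Creating", "Conceptual")].append(
    "Design a model integrating natural selection to explain evolution")

# Print only the populated cells.
for cell, objectives in taxonomy_table.items():
    if objectives:
        print(cell, "->", objectives)
```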

Applications in Testing

The Bloom test is applied across industries to assess gelatin quality, ensuring consistent gel strength for product performance, texture, and stability. It standardizes evaluation by measuring the force required to deform a prepared gel, with results guiding formulation and quality control. Bloom values are categorized as low (under 125 g, for soft gels), medium (125–225 g, for semi-firm textures), or high (225–300 g, for firm structures), as defined by the Gelatin Manufacturers Institute of America (GMIA).

Gelatin Quality Control in Food Production

In the food industry, the Bloom test evaluates gelatin for applications requiring specific textures, such as whipped toppings (low Bloom for softness), yogurts and aspics (medium Bloom for moderate firmness), and marshmallows or gummies (high Bloom for chewiness and shape retention). For example, a 225 Bloom gelatin ensures marshmallows maintain structure during molding and storage, preventing collapse and ensuring uniform texture. This testing aligns with production standards to achieve reproducibility, affecting mouthfeel and sensory qualities; inconsistencies can lead to product failure, such as overly soft or brittle confections. Modern texture analyzers automate the process, measuring the force in grams required for a 4 mm depression, adhering to GMIA protocols for 6.67% solutions ripened at 10°C.
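
In a quality-control setting, such per-product Bloom targets boil down to simple spec windows. The product names and values in this sketch are hypothetical illustrations consistent with the categories above, not published specifications:

```python
# Hypothetical Bloom spec windows in grams, loosely following the
# low/medium/high categories described above.
PRODUCT_SPECS = {
    "whipped topping": (50, 125),   # low Bloom: soft texture
    "yogurt":          (125, 225),  # medium Bloom: moderate firmness
    "marshmallow":     (200, 250),  # high Bloom: shape retention, ~225 g target
}

def batch_passes(product: str, measured_bloom_g: float) -> bool:
    """Accept a batch when its measured Bloom value sits inside the spec window."""
    low, high = PRODUCT_SPECS[product]
    return low <= measured_bloom_g <= high

print(batch_passes("marshmallow", 225))  # -> True
print(batch_passes("yogurt", 240))       # -> False
```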

Standards Compliance in Pharmaceuticals and Other Industries

In pharmaceuticals, the Bloom test verifies gelatin suitability for soft capsules, where higher Bloom values (e.g., 200–250 g) provide robust shells that protect contents and control dissolution rates, critical for drug efficacy and safety. For instance, low-Bloom gelatin may be used for rapid-release formulations, while high-Bloom gelatin ensures stability in long-term storage. This alignment with pharmacopeial standards, such as USP-NF, prevents issues like capsule brittleness or leakage. Historically, the test supported photographic gelatin for films, assessing gel firmness to maintain clarity, though digital alternatives have reduced this use. In cosmetic and industrial applications, it guides formulations for creams and adhesives, ensuring durability. Case studies, such as in capsule manufacturing, show Bloom testing reduces batch variability by 20–30%, improving compliance and yield.

Criticisms and Limitations

Theoretical Challenges

One major theoretical challenge to Bloom's Taxonomy lies in its foundational assumption of a strict cumulative hierarchy, where cognitive skills progress from lower levels like knowledge and comprehension to higher ones such as synthesis and evaluation. Critics argue that this model does not accurately reflect human cognition, as higher-order skills can emerge independently without prerequisite mastery of lower levels; for instance, an individual may exhibit creative intuition or evaluative judgment in a domain without strong recall of factual details. This flaw is highlighted by Marzano, who notes that the hierarchical structure lacks both logical coherence and empirical support, as complex thinking often involves non-sequential integration of skills rather than linear buildup.

Another significant critique concerns the taxonomy's overemphasis on the cognitive domain, which sidelines the affective domain (encompassing emotions, attitudes, and values) and the psychomotor domain (involving physical and manual skills). In testing contexts, this narrow focus results in assessments that inadequately cover holistic learning outcomes, potentially undervaluing emotional intelligence or practical competencies critical for comprehensive evaluation. Although companion taxonomies exist for the affective (Krathwohl et al., 1964) and psychomotor (Simpson, 1972) domains, the dominance of Bloom's original framework in cognitive-centric test design perpetuates incomplete coverage, as noted in analyses of its limitations for balanced educational assessment.

The taxonomy also faces accusations of cultural and contextual biases, stemming from its development in a mid-20th-century Western, individualistic context that prioritizes abstract, analytical thinking over relational or communal forms of knowledge. This orientation renders it less suitable for collaborative learning environments or non-verbal cultural traditions common in non-Western societies, where knowledge construction often emphasizes communal and contextual processes rather than solitary analysis. Critiques from education scholarship, particularly those examining epistemological diversity, underscore how the model marginalizes alternative "ways of knowing," such as connected or relational epistemologies that do not align with its linear, objective progression.

Empirically, the taxonomy's posited progression of cognitive levels in test performance remains weakly validated, with limited studies demonstrating consistent difficulty gradients or dependency across categories. Research reveals that real-world learning tasks frequently blend multiple levels without clear hierarchical advancement, challenging the framework's utility for predicting or measuring skill development in assessments. This absence of robust validation evidence undermines its theoretical robustness, as early formulations preceded modern insights into non-linear skill acquisition.

Practical Implementation Issues

One major practical challenge in applying Bloom's Taxonomy to test design is the subjectivity involved in classifying assessment items to specific cognitive levels, which often results in low inter-rater reliability. Different educators may assign the same question to varying levels due to ambiguous boundaries between categories, such as between applying and analyzing. Studies have documented inter-rater agreement rates as low as 46%, with reliability scores of 0.25 in classifications by experienced faculty, indicating substantial disagreement that undermines consistent application.

Creating higher-level assessment items aligned with the taxonomy is labor-intensive and resource-demanding, contributing to an over-reliance on lower-level multiple-choice questions, particularly in large-scale testing environments. Developing questions for evaluating or creating requires extensive planning, scenario-building, and validation of distractors or rubrics, often taking significantly more time than simpler recall-based items. In practice, the efficiency of multiple-choice formats in standardized tests favors the remembering and understanding levels, as evidenced by analyses of large-scale assessments where over 70% of items target lower cognitive levels despite educational goals emphasizing higher-order skills. This imbalance stems from constraints like large class sizes and tight timelines, limiting educators' ability to diversify question types.

The complexity of the revised taxonomy's two-dimensional matrix further exacerbates implementation issues, particularly for novice teachers who find it overwhelming to align both knowledge dimensions (e.g., factual vs. conceptual) and cognitive processes in test design. This matrix structure, while comprehensive, demands advanced training to avoid mismatches between intended objectives and actual assessments. Moreover, many digital testing platforms are ill-equipped to support higher creative levels, prioritizing automated grading for objective formats over subjective evaluation of open-ended or generative responses. Surveys from the 2010s across K-12 and higher education contexts reveal inconsistent adoption of the taxonomy, with only partial integration in curricula and frequent calls for targeted training programs to improve practical usability.
