Bees algorithm
In computer science and operations research, the bees algorithm is a population-based search algorithm which was developed by Pham, Ghanbarzadeh et al. in 2005.[1] It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighbourhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies.[2][3][4][5][6]
Metaphor
A colony of honey bees can extend itself over long distances (over 14 km)[7] and in multiple directions simultaneously to harvest nectar or pollen from multiple food sources (flower patches). A small fraction of the colony constantly searches the environment looking for new flower patches. These scout bees move randomly in the area surrounding the hive, evaluating the profitability (net energy yield) of the food sources encountered.[7] When they return to the hive, the scouts deposit the food harvested. Those individuals that found a highly profitable food source go to an area in the hive called the “dance floor”, and perform a ritual known as the waggle dance.[8] Through the waggle dance a scout bee communicates the location of its discovery to idle onlookers, which join in the exploitation of the flower patch. Since the length of the dance is proportional to the scout’s rating of the food source, more foragers get recruited to harvest the best rated flower patches. After dancing, the scout returns to the food source it discovered to collect more food. As long as they are evaluated as profitable, rich food sources will be advertised by the scouts when they return to the hive. Recruited foragers may waggle dance as well, increasing the recruitment for highly rewarding flower patches. Thanks to this autocatalytic process, the bee colony is able to quickly switch the focus of the foraging effort on the most profitable flower patches.[7]
Algorithm
The bees algorithm[2][9] mimics the foraging strategy of honey bees to look for the best solution to an optimisation problem. Each candidate solution is thought of as a food source (flower), and a population (colony) of n agents (bees) is used to search the solution space. Each time an artificial bee visits a flower (lands on a solution), it evaluates its profitability (fitness).
The bees algorithm consists of an initialisation procedure and a main search cycle which is iterated for a given number T of times, or until a solution of acceptable fitness is found. Each search cycle is composed of five procedures: recruitment, local search, neighbourhood shrinking, site abandonment, and global search.
Pseudocode for the standard bees algorithm[2]

1  for i = 1, ..., ns
   i   scout[i] = Initialise_scout()
   ii  flower_patch[i] = Initialise_flower_patch(scout[i])
2  do until stopping_condition = TRUE
   i   Recruitment()
   ii  for i = 1, ..., nb
       1  flower_patch[i] = Local_search(flower_patch[i])
       2  flower_patch[i] = Site_abandonment(flower_patch[i])
       3  flower_patch[i] = Neighbourhood_shrinking(flower_patch[i])
   iii for i = nb+1, ..., ns
       1  flower_patch[i] = Global_search(flower_patch[i])
In the initialisation routine ns scout bees are randomly placed in the search space, and evaluate the fitness of the solutions where they land. For each solution, a neighbourhood (called flower patch) is delimited.
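The initialisation step can be sketched as follows (a minimal Python illustration; the helper name initialise_scouts and the box-shaped bounds representation are our own assumptions, not part of the published algorithm):

```python
import random

def initialise_scouts(ns, bounds, fitness, rng=None):
    """Scatter ns scout bees uniformly at random over a box-shaped
    search space and evaluate the fitness of each landing site."""
    rng = rng or random.Random(0)
    lo, hi = bounds
    scouts = [[rng.uniform(l, h) for l, h in zip(lo, hi)]
              for _ in range(ns)]
    # each scout's solution becomes the centre of a flower patch
    return [(s, fitness(s)) for s in scouts]

# Example: 5 scouts on a 2-D problem, maximising -x.x
patches = initialise_scouts(5, ([-1, -1], [1, 1]),
                            lambda x: -sum(v * v for v in x))
```

Each (solution, fitness) pair then seeds a flower patch for the search cycle.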
In the recruitment procedure, the scouts that visited the nb ≤ ns fittest solutions (best sites) perform the waggle dance. That is, they recruit foragers to search the neighbourhoods of the most promising solutions further. The scouts that located the very best ne ≤ nb solutions (elite sites) recruit nre foragers each, whilst the remaining nb-ne scouts recruit nrb ≤ nre foragers each. Thus, the number of foragers recruited depends on the profitability of the food source.
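The recruitment rule reduces to a simple allocation, sketched here in Python using the parameter names from the text (nb best sites, ne elite sites, nre and nrb foragers per site):

```python
def foragers_per_site(nb, ne, nre, nrb):
    """Foragers recruited to each of the nb best sites, assuming the
    sites are already ranked by fitness (elite sites first)."""
    return [nre if rank < ne else nrb for rank in range(nb)]

# 5 best sites, 2 elite: elite sites get 7 foragers, the rest get 3
print(foragers_per_site(nb=5, ne=2, nre=7, nrb=3))  # [7, 7, 3, 3, 3]
```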
In the local search procedure, the recruited foragers are randomly scattered within the flower patches enclosing the solutions visited by the scouts (local exploitation). If any of the foragers in a flower patch lands on a solution of higher fitness than the solution visited by the scout, that forager becomes the new scout. If no forager finds a solution of higher fitness, the size of the flower patch is shrunk (neighbourhood shrinking procedure). Usually, flower patches are initially defined over a large area, and their size is gradually shrunk by the neighbourhood shrinking procedure. As a result, the scope of the local exploration is progressively focused on the area immediately close to the local fitness best. If no improvement in fitness is recorded in a given flower patch for a pre-set number of search cycles, the local maximum of fitness is considered found, the patch is abandoned (site abandonment), and a new scout is randomly generated.
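One local-search step with neighbourhood shrinking might look like the sketch below (hedged: the shrink factor of 0.8, the clamping to bounds, and the maximisation convention are our assumptions, not prescribed by the algorithm):

```python
import random

def local_search(scout, fitness, ngh, nr, bounds, shrink=0.8):
    """Send nr foragers into the patch of radius ngh around `scout`
    (maximisation). If no forager improves on the scout, the patch is
    shrunk; otherwise the best forager becomes the new scout."""
    lo, hi = bounds
    best, best_fit = scout, fitness(scout)
    for _ in range(nr):
        cand = [min(hi, max(lo, x + random.uniform(-ngh, ngh)))
                for x in scout]
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f
    if best is scout:          # no improvement: neighbourhood shrinking
        ngh *= shrink
    return best, ngh
```

Site abandonment would sit on top of this: a counter of consecutive non-improving cycles triggers re-initialising the patch from a fresh random scout.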
As in biological bee colonies,[7] a small number of scouts keep exploring the solution space in search of new regions of high fitness (global search). The global search procedure re-initialises the last ns-nb flower patches with randomly generated solutions.
At the end of one search cycle, the scout population is again composed of ns scouts: nb scouts produced by the local search procedure (some of which may have been re-initialised by the site abandonment procedure), and ns-nb scouts generated by the global search procedure. The total size of the artificial bee colony is n = ne·nre + (nb-ne)·nrb + ns (elite-site foragers + remaining best-site foragers + scouts) bees.
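The colony-size bookkeeping can be checked with a line of arithmetic (the values below are illustrative only):

```python
# n = ne*nre + (nb - ne)*nrb + ns  (elite foragers + other foragers + scouts)
ns, nb, ne = 10, 5, 2   # scouts, best sites, elite sites
nre, nrb = 7, 3         # foragers per elite site / per remaining best site
n = ne * nre + (nb - ne) * nrb + ns
print(n)  # 2*7 + 3*3 + 10 = 33
```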
Variants
In addition to the basic bees algorithm,[9] there are a number of improved or hybrid versions of the BA, each of which addresses some shortcomings of the basic BA. These variants include (but are not limited to) fuzzy or enhanced BA (EBA),[10] grouped BA (GBA),[5] hybrid modified BA (MBA)[11] and so on. The pseudocode for the grouped BA (GBA)[5] is as follows.
function GBA
%% Set the problem parameters
maxIteration = ..; % number of iterations (e.g. 1000-5000)
maxParameters = ..; % number of input variables
min = [..] ; % an array of the size maxParameters to indicate the minimum value of each input parameter
max = [..] ; % an array of the size maxParameters to indicate the maximum value of each input parameter
%% Set the grouped bees algorithm (GBA) parameters
R_ngh = ..; % patch radius of the neighborhood search for bees in the first group (e.g. 0.001 - 1)
n = ..; % number of scout bees (e.g. 4-30)
nGroups = ..; % number of groups, excluding the random group
%% GBA's automatic parameter settings
k = 3 * n / ((nGroups+1)^3 - 1); % GBA's parameter to set the number of scout bees in each group
groups = zeros(1,nGroups); % An array to keep the number of scout bees for each group
recruited_bees = zeros(1,nGroups); % An array to keep the number of recruited bees for each group
a = (mean((max - min) ./ 2) - R_ngh) / (nGroups^2 - 1); % GBA's parameter for setting neighbourhood radii (averaged over dimensions so that each group's radius is a scalar)
b = R_ngh - a; % GBA's parameter for setting neighbourhood radii
for i=1:nGroups % For each group
groups(i) = floor(k*i^2); % determine the number of scout bees in each group
if groups(i) == 0
groups(i) = 1; % there has to be at least one scout bee per each group
end
recruited_bees(i) = (nGroups+1-i)^2; % set the number of recruited bees for each group
ngh(i) = a * i*i + b; % set the radius patch for each group
end
group_random = n - sum(groups); % assign the remainder bees (if any) to random search
if group_random < 0, group_random = 0; end % make sure it is not a negative number (the builtin max is shadowed by the bounds array above)
%% initialize the population matrix
population = zeros(n,maxParameters+1); % A population of n bees including all input variables and their fitness
for i=1:n
population(i,1:maxParameters)= generate_random_solution(maxParameters,min, max); % random initialization of maxParameters variables between max and min
population(i,maxParameters+1) = evaluate_fitness(population(i,1:maxParameters)); % fitness evaluation of each solution, stored in the last column of the population matrix
end
sorted_population = sortrows(population, maxParameters+1); % sort the population by fitness (last column, ascending)
%% Iterations of the grouped bees algorithm
for i=1:maxIteration % GBA's main loop
beeIndex = 0; % keep track of all bees (i.e., patches)
for g=1:nGroups % for each group of scout bees
for j = 1 : groups(g) % exploit each patch within each group
beeIndex = beeIndex + 1; % increase the counter per each patch
for r = 1 : recruited_bees(g) % for each recruited bee of the group (avoid reusing the outer loop variable i)
solution = bee_waggle_dance(sorted_population(beeIndex,1:maxParameters),ngh(g)); % search the neighborhood around selected patch/solution within the radius of ngh
fit = evaluate_fitness(solution); % evaluate the fitness of recently found solution
if fit < sorted_population(beeIndex,maxParameters+1) % A minimization problem: if a better location/patch/solution is found by the recruiter bee
sorted_population(beeIndex,1 : maxParameters+1) = [solution(1 : maxParameters),fit]; % copy new solution and its fitness to the sorted population matrix
end
end
end
end
for r = 1 : group_random % for the remaining random bees
beeIndex = beeIndex + 1;
solution = generate_random_solution(maxParameters, min, max); % generate a new random solution
fit = evaluate_fitness(solution); % evaluate its fitness
sorted_population(beeIndex,:) = [solution(1:maxParameters), fit]; % copy the new random solution and its fitness to the sorted population matrix
end
sorted_population = sortrows(sorted_population, maxParameters+1); % sort the population by fitness (last column, ascending)
Best_solution_sofar=sorted_population(1,:);
disp('Best:');disp(Best_solution_sofar); % Display the best solution of current iteration
end % end of GBA's main loop
end % end of main function
%% Function Bee Waggle Dance
function new_solution = bee_waggle_dance(solution, ngh)
maxParameters = numel(solution); % infer dimensionality from the solution vector (matches the two-argument call above)
new_solution(1:maxParameters) = (solution - ngh) + (2*ngh.*rand(1, maxParameters)); % uniform sample within +/- ngh of the solution
end
References
[edit]- ^ Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S and Zaidi M. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
- ^ a b c Pham, D.T., Castellani, M. (2009), The Bees Algorithm – Modelling Foraging Behaviour to Solve Continuous Optimisation Problems. Proc. ImechE, Part C, 223(12), 2919-2938.
- ^ Pham, D.T. and Castellani, M. (2013), Benchmarking and Comparison of Nature-Inspired Population-Based Continuous Optimisation Algorithms, Soft Computing, 1-33.
- ^ Pham, D.T. and Castellani, M. (2015), A comparative study of the bees algorithm as a tool for function optimisation, Cogent Engineering 2(1), 1091540.
- ^ a b c Nasrinpour, H. R., Massah Bavani, A., Teshnehlab, M., (2017), Grouped Bees Algorithm: A Grouped Version of the Bees Algorithm, Computers 2017, 6(1), 5; (doi:10.3390/computers6010005)
- ^ Baronti, Luca & Castellani, Marco & Pham, D.. (2020),An Analysis of the Search Mechanisms of the Bees Algorithm., Swarm and Evolutionary Computation. 59. 100746. 10.1016/j.swevo.2020.100746
- ^ a b c d Tereshko V., Loengarov A., (2005) Collective Decision-Making in Honey Bee Foraging Dynamics Archived 2014-02-01 at the Wayback Machine. Journal of Computing and Information Systems, 9(3), 1-7.
- ^ Von Frisch, K. (1967) The Dance Language and Orientation of Bees. Harvard University Press, Cambridge, Massachusetts.
- ^ a b Pham D.T., Ghanbarzadeh A., Koc E., Otri S., Rahim S., Zaidi M., The Bees Algorithm, A Novel Tool for Complex Optimisation Problems, Proc 2nd Int Virtual Conf on Intelligent Production Machines and Systems (IPROMS 2006), Oxford: Elsevier, pp. 454-459, 2006.
- ^ Pham D. T., Haj Darwish A., (2008), A. Fuzzy Selection of Local Search Sites in the Bees Algorithm. Proceedings of Innovative Production Machines and Systems (IPROMS 2008)
- ^ Pham Q. T., Pham D. T., Castellani M., A modified Bees Algorithm and a statistics-based method for tuning its parameters. Proceedings of the Institution of Mechanical Engineers (ImechE), Part I: Journal of Systems and Control Eng., 2011 (doi:10.1177/0959651811422759)
Bees algorithm

Biological Inspiration and History
Foraging Behavior of Honey Bees
Honey bee colonies exhibit sophisticated foraging behavior to locate and exploit food sources efficiently. Scout bees, a subset of foragers, explore the environment randomly to discover new nectar and pollen patches, with foraging distances up to about 14 km reported in the literature, and typical ranges averaging 3-6 km depending on conditions and season but extending farther during resource scarcity.[6][7] These scouts assess potential sites independently before returning to the hive. Upon discovery, successful scouts communicate the location and quality of food sources through the waggle dance, a series of figure-eight movements performed on the vertical comb inside the hive. The dance encodes directional information relative to the sun's position and distance via the duration and vigor of the waggle phase, while the frequency and enthusiasm of dances—such as the number of waggles and accompanying thoracic vibrations—correlate directly with the profitability of the source, recruiting more foragers to richer patches.[8][9]

Colonies optimize resource allocation by dividing labor between exploratory scouts, who prioritize discovering novel sites, and recruited foragers, who focus on exploiting known high-yield locations, thereby balancing the trade-off between exploration and exploitation at the group level.[10][11] This division allows the colony to adapt foraging effort dynamically to environmental conditions. Individual bees evaluate food sources based on nectar quality, primarily the sugar concentration providing energy, alongside factors like distance from the hive—which increases energy expenditure for round trips—and overall accessibility, accepting sources only if the net energy gain justifies the cost.[12] These mechanisms enable efficient collective foraging, analogous to global search via scouts and local refinement through recruits in optimization strategies.

Development of the Algorithm
The Bees Algorithm was introduced in 2005 by D.T. Pham and colleagues, including A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, at the Manufacturing Engineering Centre of Cardiff University, United Kingdom, as a population-based metaheuristic optimization technique inspired by the foraging behavior of honey bees.[1] The algorithm was designed to efficiently search for near-optimal solutions in complex, multi-variable problems by simulating the collective intelligence of bee swarms.[13] The initial description appeared in a technical note published in December 2005, which outlined the core principles and preliminary simulations on benchmark functions.[1] This was followed by a more detailed exposition in a 2006 book chapter titled "The Bees Algorithm—A Novel Tool for Complex Optimisation Problems," which expanded on the method's implementation and provided empirical evidence of its performance.[14]

The development was motivated by the shortcomings of existing optimization approaches, particularly classical methods that often require excessive computational time or fail to achieve precision on NP-hard problems, as well as limitations in other swarm intelligence techniques like particle swarm optimization (PSO).[1] Unlike PSO, which relies on fixed social interactions without adaptive refinement, the Bees Algorithm incorporates a neighborhood shrinking mechanism to progressively focus searches around promising sites, enabling better balance between global exploration and local exploitation.[13]

The 2006 exposition validated the algorithm's effectiveness through applications to standard benchmark functions, such as De Jong's and Shekel's Foxholes, where it demonstrated superior speed and success rates compared to genetic algorithms, ant colony optimization, and PSO.[13] Further validations in 2009 extended to mechanical design problems, including welded beam and coil spring design optimization, confirming its utility for both continuous and combinatorial optimization tasks by yielding competitive or improved solutions in terms of structural integrity and cost minimization.[15]

Core Algorithm
Initialization Phase
The initialization phase of the Bees Algorithm establishes the starting point for the optimization process by generating and assessing an initial set of candidate solutions. It begins with the random placement of scout bees across the defined search space, where each scout bee represents a potential solution vector generated uniformly at random within the problem's bounds.[16] This step emulates the exploratory behavior of scout bees in natural honey bee colonies, which randomly survey areas for food sources.[16]

Following generation, the fitness of each scout bee's position is evaluated using the problem's objective function. For minimization problems, the fitness is typically computed as fit(x) = 1/(1 + f(x)), where f(x) is the objective function value at solution x, ensuring that lower objective values correspond to higher fitness scores; adjustments such as using |f(x)| in the denominator handle cases where f(x) < 0.[17] This evaluation ranks the sites visited by the scouts based on their fitness, providing a baseline for subsequent selection.[16] From these initial evaluations, the fittest sites are selected to serve as priority locations for further exploration in later phases, with elite sites among them allocated more resources.[16]

Search and Recruitment Process
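The fitness ranking used to select and prioritise sites can be illustrated with a common minimisation-to-fitness transform (a sketch; exact handling of negative objective values varies between implementations):

```python
def fitness(f_value):
    """Convert an objective value f (to be minimised) into a positive
    fitness score: 1/(1 + f) for f >= 0, with |f| in the denominator
    guarding against a non-positive denominator when f < 0."""
    if f_value < 0:
        return 1.0 / (1.0 + abs(f_value))
    return 1.0 / (1.0 + f_value)

print(fitness(0.0))  # 1.0
print(fitness(3.0))  # 0.25
```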
In the Bees Algorithm, the search and recruitment process constitutes the core iterative mechanism that drives optimization by mimicking the foraging recruitment behavior of honey bees. Following the evaluation of the initial population generated by scout bees, the algorithm identifies promising solution sites based on their fitness values. The best-performing sites are classified into elite and non-elite categories, with e elite sites selected from the top m sites overall. To exploit these promising areas, nep forager bees are allocated to each of the e elite sites, while nsp forager bees (where nsp < nep) are sent to each of the remaining (m - e) non-elite sites. This differential recruitment strategy concentrates more computational effort on higher-quality regions, enhancing local exploitation while maintaining diversity.[1] Each recruited forager bee then performs a neighborhood search around its assigned site, generating new candidate solutions by perturbing the site coordinates within a defined search radius. This local exploration involves evaluating the objective function at these new points to assess their fitness relative to the current site representative. The process draws from the natural tendency of bees to waggle-dance and recruit others to richer food patches, thereby focusing the swarm's effort on potentially superior solutions. For each site, multiple candidate solutions are produced and evaluated, allowing the algorithm to probe the local landscape efficiently.[13] The remaining bees perform random global searches to generate additional candidate solutions across the search space. Upon completing all searches, the fittest solution from each of the m sites' foragers is selected as the new site representative, replacing the previous if improved. All n new solutions (from local and global searches) are evaluated and ranked by fitness. The global best solution is updated if any new candidate surpasses it. 
This ranking and selection step emulates the bees' collective decision-making to prioritize the best foraging patches.[1][2] The entire recruitment and search cycle repeats iteratively, forming successive populations until a predefined stopping criterion is satisfied, such as reaching a maximum number of function evaluations or achieving a convergence threshold where improvements fall below a specified tolerance. This looped structure balances intensification around elite sites with broader exploration, progressively refining the search toward optimal solutions in the problem space. In practice, this process has demonstrated effectiveness in navigating complex, multimodal landscapes by adaptively allocating resources based on prior fitness assessments.[13]

Neighborhood Search and Site Abandonment
In the Bees Algorithm, neighborhood search enables local exploitation around promising solution sites by dispatching forager bees to explore nearby regions. For each selected site, follower bees generate candidate solutions through random perturbations within a defined neighborhood radius ngh, typically set in proportion to the range of each dimension of the problem space.[18] This mechanism mimics bees foraging in patches around food sources, allowing the algorithm to refine solutions in high-quality areas while balancing computational effort.[19]

The algorithm maintains a balance between local exploitation and global exploration by differentiating the treatment of elite and non-elite sites during neighborhood search. Elite sites, identified as the top solutions, receive more foragers per site than the remaining selected (non-elite) sites.[18] This recruitment disparity, which follows from the prior site selection, ensures efficient resource allocation akin to bee colony dynamics.[19]

Parameters and Implementation
Key Parameters
The Bees Algorithm relies on a set of tunable parameters to govern its exploration and exploitation behaviors, mimicking the foraging dynamics of honey bee colonies. These parameters define the population size, selection of promising sites, recruitment intensity, and local search scope. Proper tuning of these parameters is essential for balancing global search breadth with local optimization depth, though optimal values are problem-dependent and often determined empirically.[16]

The number of scout bees, denoted as n, specifies the total population of bees deployed for random exploration in each cycle, directly controlling the algorithm's initial coverage of the search space and overall computational effort. Typical values include n = 50.[2][1]

The number of best sites selected, m, indicates how many of the n evaluated positions are chosen for further neighborhood investigation after each scouting phase, focusing computational resources on potentially promising regions. This parameter influences the algorithm's ability to concentrate efforts without overlooking diversity; common values include m = 15.[2][20]

The number of elite sites, e, defines the subset of the m selected sites deemed most superior for intensive local search, allocating disproportionate resources to high-quality areas to accelerate convergence. Typically, e is set to 3.[1][20][2]

The recruitment parameters nep (foragers per elite site) and nsp (foragers per non-elite site) control the number of bees assigned to search neighborhoods around elite and remaining selected sites, respectively, thereby balancing intensive exploitation at top sites with moderate probing elsewhere. Typical values are nep = 12 and nsp = 8.[1][20][2]

The initial neighborhood radius, ngh, sets the size of the local search patch around each selected site, determining the scope of perturbations in the solution space during recruitment.
This parameter is problem-specific, often set to 1.0, though it can be adapted based on the search space dimensions.[1][21][2]

Pseudocode and Practical Considerations
The Bees Algorithm can be outlined in pseudocode as follows, based on its core procedure for population-based optimization:

Initialize population of n scout bees with random solutions
Evaluate fitness of the population
While stopping criterion not met:
Select m sites for neighborhood search based on best fitness
Recruit nep bees to e best sites and nsp bees to remaining (m - e) sites
Evaluate fitness of recruited bees in neighborhoods of size ngh
Select the fittest bee from each of the m patches
Assign remaining bees to random search and evaluate their fitness
Update global best solution if improved
End While
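The pseudocode above can be condensed into a short runnable sketch (minimisation convention; the parameter values and helper names below are illustrative assumptions, not prescribed by the algorithm):

```python
import random

def bees_algorithm(f, bounds, n=20, m=6, e=2, nep=8, nsp=4,
                   ngh=0.5, iterations=100, seed=0):
    """Minimal Bees Algorithm sketch for minimising f over box bounds.

    The m best sites get neighbourhood search (nep foragers at the e
    elite sites, nsp elsewhere); the remaining n - m bees rescout
    randomly each cycle."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    rand_sol = lambda: [rng.uniform(lo[d], hi[d]) for d in range(dim)]
    pop = sorted((rand_sol() for _ in range(n)), key=f)
    for _ in range(iterations):
        new_pop = []
        for rank, site in enumerate(pop[:m]):
            foragers = nep if rank < e else nsp
            best = site
            for _ in range(foragers):
                # forager samples the patch of radius ngh around the site
                cand = [min(hi[d], max(lo[d], site[d] + rng.uniform(-ngh, ngh)))
                        for d in range(dim)]
                if f(cand) < f(best):
                    best = cand
            new_pop.append(best)           # fittest bee of the patch survives
        new_pop += [rand_sol() for _ in range(n - m)]  # global search
        pop = sorted(new_pop, key=f)
    return pop[0]

# Example: minimise the sphere function in 2-D
sphere = lambda x: sum(xi * xi for xi in x)
best = bees_algorithm(sphere, ([-5.0, -5.0], [5.0, 5.0]))
print(sphere(best))  # near 0 after 100 cycles
```

The n - m re-scouted bees implement the global search, while the m surviving patch representatives carry the local search from cycle to cycle.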
Variants and Extensions
Basic Variants
The basic variants of the Bees Algorithm introduced targeted modifications to the core foraging process, aiming to enhance robustness without extensive hybridization. These early adaptations focused on automating decisions in site selection and recruitment, enabling parallel exploration, and incorporating dynamic adjustments to neighborhood searches, thereby addressing challenges like excessive parameter tuning and stagnation in multimodal landscapes.

The Fuzzy Bees Algorithm, developed by Pham and Darwish in 2008, employs fuzzy logic to automate the selection of local search sites and the allocation of recruited bees, effectively managing uncertainty in fitness evaluations. Unlike the original algorithm's fixed thresholds for elite and selected sites, this variant uses a fuzzy inference system with linguistic rules based on site fitness ranks to determine the number of sites (m) and bees per site dynamically, often reducing tunable parameters from six to fewer while maintaining exploration-exploitation balance. This leads to more stable performance across varying problem complexities, as demonstrated in applications to controller optimization where fuzzy selection improved solution quality by adapting to noisy fitness landscapes.[23]

The Grouped Bees Algorithm (GBA), proposed by Nasrinpour et al. in 2017, partitions the scout bees into independent groups assigned to subspaces of the search domain, supporting parallel processing to boost scalability on large-scale optimization tasks. Each group conducts autonomous foraging with tailored neighborhood sizes—smaller for exploitation in promising areas and larger for broader exploration—allowing concurrent evaluation of diverse regions without central coordination.
This structure mitigates computational bottlenecks in high-dimensional spaces, with implementations showing up to linear speedup in distributed environments for problems like resource allocation.[24]

The Enhanced Bees Algorithm (EBA), introduced by Yuce et al. in 2013, augments the neighborhood search with adaptive sizing and probabilistic site abandonment to hasten convergence and navigate multimodal functions more effectively. Neighborhood radii shrink after a fixed number of stagnant iterations (e.g., 10 repetitions without improvement) to intensify local exploitation, while expanding (by a factor of 2) upon progress to encourage broader scouting; sites are abandoned after 100 failed searches to escape local optima. Tested on standard benchmarks, EBA achieved superior results on multimodal problems, such as reducing mean error on the 10-dimensional Ackley function to 0.0063 from 1.2345 in the basic variant, while needing only about 10% of the evaluations of the basic variant on the Rastrigin function (93,580 vs. 885,000). These changes lessen dependence on static parameters like patch size, yielding consistent gains in 2009–2015 studies on functions with multiple minima.[25]

Collectively, these variants reduced parameter sensitivity through automation and adaptation, as evidenced in meta-optimization approaches like Otri's 2011 framework. For multimodal handling, EBA and similar tweaks outperformed the original on deceptive landscapes like Griewank, attaining near-global optima in 30–50% fewer iterations in comparative evaluations from the period.[26]

Advanced and Hybrid Approaches
Advanced approaches to the Bees Algorithm (BA) have focused on integrating elements from other metaheuristics to enhance exploration, diversity, and adaptability, particularly in complex optimization landscapes. One prominent hybrid is the Genetic Bees Algorithm (GBA), which incorporates genetic operators such as crossover and mutation into the BA framework to improve population diversity and global search capabilities. In GBA, crossover exchanges genetic material between selected solutions during the global search phase, while mutation introduces random perturbations to prevent premature convergence to local optima. This integration has been shown to reduce convergence time by approximately 60% and improve solution quality by up to 30% on constrained scheduling benchmarks compared to the standard BA.[27] To address stagnation in local optima, chaotic variants incorporate chaos maps for enhanced randomization in initialization and search processes. The hybrid Cuckoo Search–Bees Algorithm (CSBA) combines BA's foraging mechanics with cuckoo search's Levy flight-inspired global jumps, using a 4D memristive Lu chaotic map to generate diverse initial populations. This chaotic initialization promotes ergodicity and sensitivity to initial conditions, enabling better escape from local minima while maintaining BA's neighborhood exploitation. On cryptographic S-box generation benchmarks, CSBA achieves superior nonlinearity (average 109.75) and strict avalanche criterion (0.5000) compared to standalone BA (nonlinearity 109.25), demonstrating improved robustness in discrete optimization tasks.[28] Simplification efforts have led to the Single-Parameter Bees Algorithm (BA1), which reduces the original BA's multiple tunable parameters (e.g., elite sites, neighborhood sizes) to a single one—the number of scout bees (n=100)—via adaptive mechanisms like incremental k-means clustering for patch formation and dynamic adjustment of shrinking rates. 
This design minimizes user overhead in parameter tuning, making it suitable for practical implementations in high-dimensional problems. Evaluated on CEC 2014 benchmark functions, BA1 attains exact solutions for 8 out of 23 problems and ranks first on 12 others, with low variance indicating stable performance across dimensions up to 50.[29]

Hybrids with neural networks enable dynamic parameter adaptation, particularly for machine learning optimization. The BA-Bayesian Optimization-Convolutional Neural Network (BA-BO-CNN) uses BA to fine-tune learning rates derived from Bayesian optimization across CNN layers, enhancing weight updates for deeper architectures. This approach boosts validation accuracy by 1.5 percentage points (from 80.72% to 82.22%) on the CIFAR-10 dataset while reducing training time, highlighting its efficacy in real-time deep learning applications post-2020.[30]

More recent enhancements include the Fibonacci-inspired Bees Algorithm (2024), which incorporates the Fibonacci sequence to guide neighborhood sizing for improved convergence on benchmark functions.[31] Neighborhood search enhancements via Levy flights further bolster handling of multimodal and high-dimensional landscapes. The Patch-Levy-based Bees Algorithm (PLIA_BA) employs Levy flight distributions to initialize and explore patches, generating step sizes with heavy-tailed properties for balanced local intensification and global diversification. This variant outperforms standard BA and related algorithms like ABC on 12 scalable benchmarks, achieving higher success rates and faster convergence in dimensions up to 100, with improvements in solution quality of 10-20% on select CEC-like functions. These advancements collectively address BA's limitations in scalability and stagnation, yielding 10-20% better overall performance on CEC benchmarks in recent evaluations.[32]

Applications and Performance
Engineering and Design Optimization
The Bees Algorithm has been applied extensively in engineering design optimization, leveraging its population-based search mechanism to address complex, constrained problems involving multiple objectives such as cost minimization, structural integrity, and performance efficiency.[15] In mechanical design, early applications focused on optimizing weldment structures and pressure vessel designs, where the algorithm efficiently navigates non-linear constraints to yield practical improvements. Studies from 2006 to 2010 demonstrated its ability to reduce design costs significantly, with reported improvements up to 15% compared to traditional methods like genetic algorithms, by refining parameters such as material thickness and joint dimensions while satisfying stress and deflection limits.[33][15] For instance, in welded beam optimization, the algorithm minimized fabrication costs to approximately 1.725 units, outperforming prior benchmarks by balancing shear stress and buckling constraints.[15] In manufacturing, the Bees Algorithm has addressed scheduling and facility layout challenges, particularly in job-shop environments where minimizing makespan—the total completion time—is critical for operational efficiency. 
Applications around 2012 utilized enhanced variants to sequence jobs across machines, achieving near-optimal schedules for instances with 40 to 100 jobs by incorporating neighborhood search strategies like job swaps and subsequence reversals.[34] For facility layout problems, hybrid versions combining the Bees Algorithm with particle swarm optimization have optimized department placements in multi-story buildings, reducing material handling costs and improving workflow in manufacturing settings, as demonstrated in a 2011 case study of a 28-facility hospital layout with fitness values improved by over 20% relative to standalone methods.[35]

Civil engineering applications of the Bees Algorithm emphasize structural optimization, such as truss designs that balance weight reduction with strength requirements under load constraints. In 2015 studies, multi-objective variants minimized truss volume and nodal displacements while adhering to cross-sectional area limits, producing Pareto-optimal sets that enhanced structural reliability for space truss configurations.[36] These approaches clustered solutions to manage trade-offs, yielding designs with up to 10% weight savings without compromising stress thresholds.[36]

Notable specific examples include Pham et al.'s 2007 application to robotic arm trajectory planning, where a Pareto-based multi-objective Bees Algorithm optimized minimum-time paths for a SCARA-type arm, incorporating smoothness constraints and outperforming genetic algorithms and simplex methods in travel time reduction.[37] More recently, in 2018, an improved Bees Algorithm variant was employed for sizing hybrid renewable energy systems, including wind turbines, in off-grid desalination setups, optimizing PV-wind-battery-hydrogen configurations to minimize levelized costs while ensuring reliable power output for wind-integrated farms.[38]

Benchmarking and Comparisons
The Bees Algorithm (BA) has been evaluated on standard benchmark functions such as the Sphere and Rastrigin functions in studies from 2009 to 2015, demonstrating competitive performance, particularly on multimodal landscapes. In a 2013 analysis, the standard BA achieved near-optimal results on the 10-dimensional Sphere function (mean absolute difference of 0.0000 after 285,039 evaluations), matching the performance of particle swarm optimization (PSO) and artificial bee colony (ABC) algorithms.[2] On the 10-dimensional Rastrigin function, however, BA yielded a mean absolute difference of 24.8499, underperforming both ABC (0.0000) and evolutionary algorithms (EA, 2.9616).[2] A 2015 comparative study nonetheless highlighted BA's strengths on multimodal functions like Rastrigin, achieving a 100% success rate on 17 of 18 custom 2D benchmarks and outperforming EA on deceptive landscapes and PSO on origin-biased terrains.[39]

In terms of convergence, early implementations of BA showed faster rates than genetic algorithms (GA) in reaching optima on continuous optimization problems, attributed to efficient site selection and neighborhood search mechanisms.[1] Comparisons with other metaheuristics reveal BA's strengths in exploration: versus PSO, BA provides better global search on multimodal problems but exhibits slower convergence on unimodal functions due to its patch-based scouting.[39] Relative to ABC, which shares bee-inspired foraging, BA's dynamic neighborhood shrinking reduces premature convergence on multimodal benchmarks, though ABC often excels in speed on separable functions.[40] Against differential evolution (DE), BA demonstrates advantages in discrete optimization spaces, such as combinatorial problems where DE's mutation strategies can stagnate, as evidenced by hybrid BA-DE studies achieving improved solution quality.[41]

Key strengths of BA include its simplicity, with few tunable parameters (e.g., the number of bees and patch sizes) enabling ease of implementation, and a balanced exploration-exploitation trade-off achieved through random scouting and elite-site refinement, making it robust to fitness landscape variations.[39] Its limitations include parameter sensitivity, where suboptimal choices for the neighborhood size (ngh) can degrade performance, and potential stagnation in very high-dimensional spaces (e.g., above 30 dimensions), as BA's fixed patch sizes may overlook distant optima.[40]

Recent analyses from 2020 to 2024, including hybrid variants, indicate enhanced scalability and performance on modern benchmarks. A 2021 study introduced BA with search space reduction (BAwSSR), which outperformed standard BA, PSO (SPSO2011), and quick ABC (qABC) on 20 of 24 functions, including Sphere and Rastrigin, with mean accuracies of 4.10E-32 and 0.00E+00 respectively; the differences were confirmed as significant by Mann-Whitney tests (α = 0.05).[40] These hybrids address earlier scalability issues on high-dimensional CEC-style suites by incorporating segmentation, achieving up to 100% success rates with far fewer evaluations (e.g., 102 for Sphere), thus extending BA's applicability beyond pre-2020 limitations. As of 2024, the Bees Algorithm continues to be explored in applications such as robotic disassembly planning.[40][42]

Representative 10-dimensional results from the 2021 BAwSSR study:[40]

| Function (10D) | Algorithm | Mean Fitness | Success Rate (%) | Evaluations (Avg.) |
|---|---|---|---|---|
| Sphere | BAwSSR | 4.10E-32 | 100 | 102 |
| Sphere | Standard BA | 8.19E-04 | 100 | 9180 |
| Rastrigin | BAwSSR | 0.00E+00 | 100 | 102 |
| Rastrigin | Standard BA | 1.33E+01 | 0 | 500000 |
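For orientation, the basic Bees Algorithm loop that these benchmark studies exercise can be sketched in a few dozen lines. The following is a minimal illustrative implementation, not code from any cited study; the parameter names (n, m, e, nep, nsp, ngh) follow common BA notation, but the concrete values and the geometric shrink factor are assumptions chosen for demonstration.

```python
import math
import random

def sphere(x):
    """Sphere benchmark: unimodal, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Rastrigin benchmark: highly multimodal, global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def bees_algorithm(f, dim, lo=-5.12, hi=5.12, n=50, m=10, e=3,
                   nep=10, nsp=5, ngh=1.0, shrink=0.98, iters=300, seed=0):
    """Minimal basic Bees Algorithm (illustrative parameter values).

    n scout bees; m selected sites, of which e are elite; nep / nsp
    foragers recruited per elite / non-elite site; ngh is the initial
    patch radius, multiplied by `shrink` each iteration to mimic
    dynamic neighborhood shrinking.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    radius = ngh
    for _ in range(iters):
        pop.sort(key=f)  # best-rated sites first
        next_pop = []
        for i in range(m):
            recruits = nep if i < e else nsp
            best = pop[i]
            # Local search: recruited foragers sample the patch around the site
            for _ in range(recruits):
                cand = [min(hi, max(lo, xj + rng.uniform(-radius, radius)))
                        for xj in pop[i]]
                if f(cand) < f(best):
                    best = cand
            next_pop.append(best)
        # Remaining bees scout the search space at random (global search)
        next_pop += [[rng.uniform(lo, hi) for _ in range(dim)]
                     for _ in range(n - m)]
        pop = next_pop
        radius *= shrink
    return min(pop, key=f)

x_best = bees_algorithm(sphere, dim=10)
f_best = sphere(x_best)
```

Running the sketch on the 10-dimensional Sphere function drives the fitness close to zero through elite-site recruitment combined with neighborhood shrinking; substituting `rastrigin` exposes the multimodality that the comparative studies above discuss.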

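The Patch-Levy-based variant described earlier samples patches with heavy-tailed Levy steps rather than a bounded uniform neighborhood. A common generator for such steps is Mantegna's algorithm, sketched below; the function name and the 0.1 scale factor are illustrative assumptions, not details of the cited PLIA_BA implementation.

```python
import math
import random

def levy_step(beta=1.5, rng=None):
    """One Levy-distributed step via Mantegna's algorithm.

    beta is the Levy index (1 < beta <= 2); smaller values give heavier
    tails, i.e. occasional long jumps mixed in with many short moves.
    """
    if rng is None:
        rng = random.Random()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# A recruited bee perturbs a patch centre with Levy steps instead of a
# uniform draw bounded by a fixed patch radius (the 0.1 scale is arbitrary):
rng = random.Random(42)
centre = [0.5, -1.2]
candidate = [c + 0.1 * levy_step(rng=rng) for c in centre]
```

The occasional long jump lets a forager escape a patch that has stagnated, while the bulk of short steps still intensifies the local search, which is the exploration-exploitation balance the Levy-based variants aim for.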