Object pool pattern
from Wikipedia

The object pool pattern is a software creational design pattern that uses a set of initialized objects kept ready to use – a "pool" – rather than allocating and destroying them on demand. A client of the pool will request an object from the pool and perform operations on the returned object. When the client has finished, it returns the object to the pool rather than destroying it; this can be done manually or automatically.

Object pools are used primarily for performance: in some circumstances they improve it significantly. They complicate object lifetime, however, since objects obtained from and returned to a pool are not actually created or destroyed at that time, and thus require care in implementation.

Description

When it is necessary to work with numerous objects that are particularly expensive to instantiate and each object is only needed for a short period of time, the performance of an entire application may be adversely affected. An object pool design pattern may be deemed desirable in cases such as these.

The object pool design pattern creates a set of objects that may be reused. When a new object is needed, it is requested from the pool. If a previously prepared object is available, it is returned immediately, avoiding the instantiation cost. If no objects are present in the pool, a new item is created and returned. When the object has been used and is no longer needed, it is returned to the pool, allowing it to be used again in the future without repeating the computationally expensive instantiation process. It is important to note that once an object has been used and returned, existing references will become invalid.

In some object pools the resources are limited, so a maximum number of objects is specified. If this number is reached and a new item is requested, an exception may be thrown, or the thread will be blocked until an object is released back into the pool.

The object pool design pattern is used in several places in the standard classes of the .NET Framework. One example is the .NET Framework Data Provider for SQL Server. As SQL Server database connections can be slow to create, a pool of connections is maintained. Closing a connection does not actually relinquish the link to SQL Server. Instead, the connection is held in a pool, from which it can be retrieved when requesting a new connection. This substantially increases the speed of making connections.

Benefits

Object pooling can offer a significant performance boost in situations where the cost of initializing a class instance is high and the rate of instantiation and destruction of a class is high – in this case objects can frequently be reused, and each reuse saves a significant amount of time. Object pooling requires resources – memory and possibly other resources, such as network sockets, and thus it is preferable that the number of instances in use at any one time is low, but this is not required.

A pooled object is obtained in predictable time, whereas creating a new object (especially over a network) may take variable time. These benefits apply mostly to objects that are expensive in terms of time, such as database connections, socket connections, threads, and large graphical objects like fonts or bitmaps.

In other situations, pooling simple objects (which hold no external resources and only occupy memory) may not be efficient and can decrease performance.[1] For simple memory pooling, the slab allocation memory management technique is better suited, as its only goal is to minimize the cost of memory allocation and deallocation by reducing fragmentation.

Implementation

Object pools can be implemented in an automated fashion in languages like C++ via smart pointers. In the constructor of the smart pointer, an object can be requested from the pool, and in the destructor of the smart pointer, the object can be released back to the pool. In garbage-collected languages, where there are no destructors (which are guaranteed to be called as part of a stack unwind), object pools must be implemented manually, by explicitly requesting an object from the factory and returning the object by calling a dispose method (as in the dispose pattern). Using a finalizer to do this is not a good idea, as there are usually no guarantees on when (or if) the finalizer will be run. Instead, "try ... finally" should be used to ensure that getting and releasing the object is exception-neutral.
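
In Go (used for the sketches in this article), the same guarantee that "try ... finally" provides can be obtained with defer, which runs even if the function panics. The following is a minimal sketch, not a production implementation; the buffer pool, get, and put are hypothetical names:

```go
package main

import "fmt"

// pool is a minimal object pool backed by a buffered channel.
var pool = make(chan []byte, 4)

// get returns a pooled buffer, or allocates a fresh one if the pool is empty.
func get() []byte {
	select {
	case b := <-pool:
		return b // reuse a pooled buffer
	default:
		return make([]byte, 1024) // pool empty: allocate
	}
}

// put returns a buffer to the pool, dropping it if the pool is full.
func put(b []byte) {
	select {
	case pool <- b:
	default:
	}
}

func work() {
	b := get()
	// defer guarantees the buffer is returned to the pool even if the
	// function panics, mirroring try/finally or a smart-pointer destructor.
	defer put(b)
	copy(b, "payload")
}

func main() {
	work()
	fmt.Println(len(pool)) // the buffer was returned to the pool
}
```

Because release is attached to function exit rather than to a garbage-collection finalizer, the timing of the return to the pool is deterministic.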

Manual object pools are simple to implement, but harder to use, as they require manual memory management of pool objects.

Handling of empty pools

Object pools employ one of three strategies to handle a request when there are no spare objects in the pool.

  1. Fail to provide an object (and return an error to the client).
  2. Allocate a new object, thus increasing the size of the pool. Pools that do this usually allow you to set the high water mark (the maximum number of objects ever used).
  3. In a multithreaded environment, a pool may block the client until another thread returns an object to the pool.
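
With a buffered channel as the pool, the three strategies can be sketched as follows. This is an illustrative sketch; the helper names (getOrFail, getOrGrow, getOrWait) are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type conn struct{ id int }

var pool = make(chan *conn, 2)

// Strategy 1: fail and return an error when the pool is empty.
func getOrFail() (*conn, error) {
	select {
	case c := <-pool:
		return c, nil
	default:
		return nil, errors.New("pool exhausted")
	}
}

// Strategy 2: allocate a new object, growing the effective pool size.
func getOrGrow() *conn {
	select {
	case c := <-pool:
		return c
	default:
		return &conn{id: -1} // newly allocated, raises the high water mark
	}
}

// Strategy 3: block (here with a timeout) until another goroutine
// returns an object to the pool.
func getOrWait(timeout time.Duration) (*conn, error) {
	select {
	case c := <-pool:
		return c, nil
	case <-time.After(timeout):
		return nil, errors.New("timed out waiting for an object")
	}
}

func main() {
	_, err := getOrFail()
	fmt.Println(err) // pool starts empty, so strategy 1 fails

	c := getOrGrow() // strategy 2 allocates instead
	pool <- c        // release the object back to the pool

	c2, _ := getOrWait(time.Second) // strategy 3 now succeeds immediately
	fmt.Println(c2 != nil)
}
```

In a real implementation the timeout in strategy 3 is usually configurable, and strategy 2 is typically capped at a maximum pool size.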

Pitfalls

Care must be taken to ensure that objects returned to the pool are reset to a sensible state for their next use; otherwise an object may be in a state the client does not expect, which may cause it to fail. The pool, not the clients, is responsible for resetting objects. Pools full of objects with dangerously stale state are sometimes called object cesspools and regarded as an anti-pattern.

Stale state may not always be an issue; it becomes dangerous when it causes the object to behave unexpectedly. For example, an object representing authentication details may fail if the "successfully authenticated" flag is not reset before it is reused, since it indicates that a user is authenticated (possibly as someone else) when they are not. However, failing to reset a value used only for debugging, such as the identity of the last authentication server used, may pose no issues.

Inadequate resetting of objects can cause information leaks. Objects containing confidential data (e.g. a user's credit card numbers) must be cleared before being passed to new clients, otherwise, the data may be disclosed to an unauthorized party.
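
A sketch of pool-side resetting in Go, using a hypothetical session type holding both a behavioral flag and confidential data:

```go
package main

import "fmt"

// session is a pooled object carrying per-use state, some of it sensitive.
type session struct {
	authenticated bool   // stale value would misreport authentication
	cardNumber    string // confidential: must never leak to the next client
}

// reset clears all per-use state; the pool, not the client, calls it.
func (s *session) reset() {
	s.authenticated = false
	s.cardNumber = ""
}

var pool = make(chan *session, 1)

// release resets the object before making it available again.
func release(s *session) {
	s.reset() // without this, the next client sees stale and leaked state
	select {
	case pool <- s:
	default:
	}
}

func main() {
	s := &session{}
	s.authenticated = true
	s.cardNumber = "4111 1111 1111 1111"
	release(s)

	next := <-pool // the next client acquires the same object
	fmt.Println(next.authenticated, next.cardNumber == "")
}
```

Putting the reset in the release path, rather than trusting each client to clean up, is what prevents the "cesspool" failure mode described above.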

If the pool is used by multiple threads, it may need a means of preventing concurrent threads from acquiring the same object at the same time. This is not necessary if the pooled objects are immutable or otherwise thread-safe.

Criticism

Some publications do not recommend using object pooling with certain languages, such as Java, especially for objects that only use memory and hold no external resources (such as database connections). Opponents usually argue that object allocation is relatively fast in modern languages with garbage collectors: while the operator new may need only about ten instructions, the classic new/delete pair found in pooling designs requires hundreds, as it does more complex work. Also, most garbage collectors scan "live" object references, not the memory these objects use for their content, which means that any number of "dead" objects without references can be discarded at little cost. In contrast, keeping a large number of "live" but unused objects increases the duration of garbage collection.[1]

Examples

C++

In C++26, the C++ Standard Library introduces a new header <hive> with the data structure std::hive, which essentially implements an object pool: it is a collection that reuses the memory of erased elements. It is accompanied by the class std::hive_limits, which carries layout information about block capacity limits.[2]

import std;

using std::hive;
using std::plus;

int main(int argc, char* argv[]) {
    hive<int> intHive;

    // Insert 100 ints:
    for (int i = 0; i < 100; ++i) {
        intHive.insert(i);
    }

    // Erase every other element (half of them). Erasing takes an iterator,
    // and must not be done inside a range-based for loop:
    for (auto it = intHive.begin(); it != intHive.end();) {
        it = intHive.erase(it); // erase returns the iterator to the next element
        if (it != intHive.end()) {
            ++it; // skip one element so only every other one is erased
        }
    }

    int total = std::ranges::fold_left(intHive, 0, plus<int>());
    std::println("Total of all elements: {}", total);

    return 0;
}

C#

In the .NET Base Class Library there are a few objects that implement this pattern. System.Threading.ThreadPool is configured to have a predefined number of threads to allocate. When the threads are returned, they are available for another computation. Thus, one can use threads without paying the cost of creation and disposal of threads.

The following shows the basic code of the object pool design pattern implemented using C#. Pool is shown as a static class, as it's unusual for multiple pools to be required. However, it's equally acceptable to use instance classes for object pools.

using System;
using System.Collections.Generic;

namespace Wikipedia.Examples;

// The PooledObject class is the type that is expensive or slow to instantiate,
// or that has limited availability, so is to be held in the object pool.
public class PooledObject
{
    private DateTime _createdAt = DateTime.Now;

    public DateTime CreatedAt => _createdAt;

    public string? TempData { get; set; }
}

// The Pool class controls access to the pooled objects. It maintains a list of available objects and a 
// collection of objects that have been obtained from the pool and are in use. The pool ensures that released objects 
// are returned to a suitable state, ready for reuse. 
public static class Pool
{
    private static List<PooledObject> _available = new();
    private static List<PooledObject> _inUse = new();

    public static PooledObject GetObject()
    {
        lock (_available)
        {
            if (_available.Count != 0)
            {
                PooledObject po = _available[0];
                _inUse.Add(po);
                _available.RemoveAt(0);
                return po;
            }
            else
            {
                PooledObject po = new();
                _inUse.Add(po);
                return po;
            }
        }
    }

    public static void ReleaseObject(PooledObject po)
    {
        CleanUp(po);

        lock (_available)
        {
            _available.Add(po);
            _inUse.Remove(po);
        }
    }

    private static void CleanUp(PooledObject po)
    {
        po.TempData = null;
    }
}

In the code above, PooledObject has one property recording when it was created and another, TempData, that clients may modify and that is reset when the object is released back to the pool. This clean-up step on release ensures the object is in a valid state before it can be requested from the pool again.

Go

The following Go code initializes a resource pool of a specified size (creating the resources concurrently), uses channels to avoid races, and, when the pool is empty, applies a timeout so clients do not wait too long.

// package pool
package pool

import (
	"errors"
	"log"
	"math/rand"
	"sync"
	"time"
)

const getResMaxTime = 3 * time.Second

var (
	ErrPoolNotExist  = errors.New("pool not exist")
	ErrGetResTimeout = errors.New("get resource time out")
)

// Resource represents a pooled resource.
type Resource struct {
	resId int
}

// NewResource simulates slow resource creation (e.g., establishing a TCP
// connection, acquiring an SSL session key, or authenticating is time-consuming).
func NewResource(id int) *Resource {
	time.Sleep(500 * time.Millisecond)
	return &Resource{resId: id}
}

// Do simulates using the resource for a random 0-400 ms of work.
func (r *Resource) Do(workId int) {
	time.Sleep(time.Duration(rand.Intn(5)) * 100 * time.Millisecond)
	log.Printf("resource #%d finished work #%d\n", r.resId, workId)
}

// Pool is based on a Go channel, which avoids races on the pooled resources.
type Pool chan *Resource

// New creates a resource pool of the specified size.
// Resources are created concurrently to save initialization time.
func New(size int) Pool {
	p := make(Pool, size)
	wg := new(sync.WaitGroup)
	wg.Add(size)
	for i := 0; i < size; i++ {
		go func(resId int) {
			p <- NewResource(resId)
			wg.Done()
		}(i)
	}
	wg.Wait()
	return p
}

// GetResource acquires a resource from the pool; when the pool is empty,
// it times out rather than blocking indefinitely.
func (p Pool) GetResource() (*Resource, error) {
	select {
	case r := <-p:
		return r, nil
	case <-time.After(getResMaxTime):
		return nil, ErrGetResTimeout
	}
}

// GiveBackResource returns resources to the resource pool
func (p Pool) GiveBackResource(r *Resource) error {
	if p == nil {
		return ErrPoolNotExist
	}
	p <- r
	return nil
}

// package main
package main

import (
	"github.com/tkstorm/go-design/creational/object-pool/pool"
	"log"
	"sync"
)

func main() {
	// Initialize a pool of five resources,
	// which can be adjusted to 1 or 10 to see the difference
	size := 5
	p := pool.New(size)

	// Invokes a resource to do the id job
	doWork := func(workId int, wg *sync.WaitGroup) {
		defer wg.Done()
		// Get the resource from the resource pool
		res, err := p.GetResource()
		if err != nil {
			log.Println(err)
			return
		}
		// Return the resource to the pool when done
		defer p.GiveBackResource(res)
		// Use resources to handle work
		res.Do(workId)
	}

	// Simulate 100 concurrent workers acquiring resources from the pool
	num := 100
	wg := new(sync.WaitGroup)
	wg.Add(num)
	for i := 0; i < num; i++ {
		go doWork(i, wg)
	}
	wg.Wait()
}

Java

Java supports thread pooling via java.util.concurrent.ExecutorService and related classes. The executor service has a number of "core" threads that are never discarded. If all threads are busy, the service allocates up to a maximum number of extra threads, which are later discarded if idle for longer than a configured expiration time. If no more threads are allowed, tasks are placed in a queue. Finally, if the queue grows too long, it can be configured to suspend the requesting thread.

In PooledObject.java:

package org.wikipedia.examples;

public class PooledObject {
	private String temp1;
	private String temp2;
	private String temp3;
	
	public String getTemp1() {
		return temp1;
	}

	public void setTemp1(String temp1) {
		this.temp1 = temp1;
	}

	public String getTemp2() {
		return temp2;
	}

	public void setTemp2(String temp2) {
		this.temp2 = temp2;
	}

	public String getTemp3() {
		return temp3;
	}

	public void setTemp3(String temp3) {
		this.temp3 = temp3;
	}
}

In PooledObjectPool.java:

package org.wikipedia.examples;

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class PooledObjectPool {
	private static final long expTime = 6000; // expiration time: 6 seconds
	private static final Map<PooledObject, Long> available = new HashMap<>();
	private static final Map<PooledObject, Long> inUse = new HashMap<>();

	public static synchronized PooledObject getObject() {
		long now = System.currentTimeMillis();
		Iterator<Map.Entry<PooledObject, Long>> it = available.entrySet().iterator();
		while (it.hasNext()) {
			Map.Entry<PooledObject, Long> entry = it.next();
			// Remove through the iterator to avoid a ConcurrentModificationException.
			it.remove();
			if (now - entry.getValue() > expTime) {
				continue; // object has expired; discard it and keep looking
			}
			PooledObject po = entry.getKey();
			inUse.put(po, now);
			return po;
		}

		// either no PooledObject is available or each has expired, so return a new one
		return createPooledObject(now);
	}

	private static PooledObject createPooledObject(long now) {
		PooledObject po = new PooledObject();
		inUse.put(po, now);
		return po;
	}

	public static synchronized void releaseObject(PooledObject po) {
		cleanUp(po);
		available.put(po, System.currentTimeMillis());
		inUse.remove(po);
	}

	private static void cleanUp(PooledObject po) {
		po.setTemp1(null);
		po.setTemp2(null);
		po.setTemp3(null);
	}
}

from Grokipedia
The object pool pattern is a creational design pattern that maintains a collection of pre-instantiated, reusable objects to fulfill client requests, thereby minimizing the performance overhead of frequently creating and destroying resource-intensive objects. This approach involves a central pool manager, often implemented as a singleton, that allocates objects from the pool when available, creates new ones if the pool is depleted, and reclaims them upon release for future use. The primary motivation for the object pool pattern stems from scenarios where object instantiation is computationally expensive or involves significant resource acquisition, such as database connections, threads, or graphical components, allowing applications to achieve better efficiency. By reusing objects rather than allocating and deallocating them repeatedly, the pattern reduces memory fragmentation, garbage collection pressure in managed languages like .NET or Java, and overall system latency. It is particularly applicable in high-throughput environments, such as web servers or game engines, where the rate of object requests exceeds the cost of pool management. In terms of structure, the pattern typically comprises three core components: the reusable objects (instances that can be safely reset and reused), the client (code that acquires and releases objects via the pool), and the pool manager (a controller that tracks availability, enforces pool size limits, and handles acquisition and release operations). Implementations often incorporate thread-safety mechanisms, such as concurrent collections in .NET (e.g., ConcurrentBag<T>) or synchronization primitives in other languages, to support multi-threaded access without race conditions. While not part of the original Gang of Four catalog, the pattern builds on creational principles and is classified as a resource-management strategy in modern literature.
Common real-world applications include connection pooling in database systems to limit open connections and buffer pools in operating systems for efficient data access. In game development, it optimizes rendering by recycling particle effects or enemy entities, preventing allocation spikes during intense gameplay. Frameworks like Microsoft.Extensions.ObjectPool provide built-in support, simplifying adoption while allowing customization for specific pool sizes and eviction policies.

Fundamentals

Definition and Purpose

The object pool pattern is a creational design pattern that preallocates a set of initialized objects, maintaining them in a pool for reuse rather than creating and destroying instances on demand. This approach involves clients acquiring objects from the pool when needed and returning them upon completion, allowing the same resources to serve multiple requests efficiently. The primary purpose of the object pool pattern is to optimize performance in environments where object creation and destruction incur significant overhead, such as establishing database connections, managing threads, or rendering graphical elements. By reusing pre-initialized objects, the pattern minimizes the computational cost associated with frequent allocations, which is particularly beneficial in high-throughput systems. In managed languages like Java and .NET, this also helps mitigate garbage collection pressure caused by rapid object turnover, as fewer allocations reduce the frequency and duration of collection cycles. Key characteristics of the object pool pattern include support for fixed or dynamic pool sizes to control resource limits, explicit acquire and release mechanisms to manage object lifecycle, and considerations for thread-safety in concurrent environments to prevent race conditions during access. Pools can be configured with minimum and maximum capacities to balance memory usage against availability, ensuring scalability without excessive overhead.

Benefits

The object pool pattern provides significant performance improvements by minimizing the overhead associated with repeated object creation and destruction. In scenarios where objects are short-lived and frequently instantiated, such as in high-throughput applications, pooling reduces the time spent on allocation and initialization, leading to overall speedups of 10-25% in heap-intensive programs, with even greater gains (up to 2x or more) in specific benchmarks involving pointer-intensive data structures. This approach also alleviates garbage collection pressure by limiting allocations to the initial pool setup, thereby decreasing the CPU cycles dedicated to collection work in garbage-collected environments like Java or .NET. By avoiding per-use instantiations, it can also reduce runtime in multi-threaded tasks. Resource efficiency is another key advantage, as the pattern promotes reuse of pre-allocated objects, resulting in lower CPU and memory usage for repetitive operations. In game engines, object pooling for elements like particle effects or projectiles prevents constant heap allocations during gameplay loops, reducing garbage collection frequency and maintaining smoother frame rates; for example, pre-allocating a fixed pool for such effects avoids generating temporary objects at rates that could reach 60 KB per second at 60 FPS, optimizing memory behavior without sacrificing visual fidelity. Similarly, in server applications, pooling database connections amortizes the high cost of establishing network links across multiple requests, conserving system resources and enabling efficient handling of concurrent clients without proportional increases in overhead. The pattern enhances scalability, particularly for bursty workloads, by spreading creation costs over multiple uses and ensuring objects are readily available. This is evident in memory-constrained or multi-threaded settings, where pools reduce response times and allocation pressure, allowing applications to scale across cores without excessive allocation churn. In I/O-bound operations, such as managing network sockets or file handles, pooling mitigates the expensive setup of these resources, improving throughput in environments like distributed systems or real-time servers by reusing initialized instances rather than incurring repeated initialization delays.

Implementation

Core Mechanics

The object pool pattern employs a centralized container, typically implemented as a queue, list, or concurrent collection such as a ConcurrentBag, to manage a set of reusable objects. This structure maintains a collection of available objects that can be borrowed by clients through an acquire method (e.g., Get or acquire) and returned via a release method (e.g., Return or release). The pool acts as a mediator, ensuring that objects are allocated from the available set without creating new instances unless necessary, thereby promoting efficient resource utilization. Initialization of the pool can occur through pre-allocation, where a fixed number of objects are created upfront and stored in the container, or through lazy creation, where objects are generated on demand using a provided factory function until a predefined maximum is reached. For instance, a pool constructor may accept a generator function used to create new objects when the pool is empty and the configured maximum allows growth. This approach balances initial setup costs with scalability, preventing unbounded growth while accommodating varying demand. Lifecycle management involves resetting the state of returned objects to a clean, reusable condition, often through an initialization or reset method invoked upon release, to eliminate residual data from prior uses. If an object becomes unusable due to errors, expiration, or disposal, it is invalidated and removed from the pool, potentially triggering resource cleanup such as calling Dispose for objects implementing IDisposable. Periodic maintenance, such as timer-based checks, may further evict invalid or idle objects to keep the pool healthy. To support concurrent access in multithreaded environments, thread-safety is achieved through techniques including concurrent data structures that provide atomic operations for insertion and removal, or explicit locks around the acquire and release methods. Built-in implementations, such as .NET's ObjectPool<T>, ensure all operations are inherently thread-safe without requiring additional synchronization from clients. This prevents race conditions during borrowing and returning, maintaining the integrity of the pool's state across threads.
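
These mechanics (a guarded container, a factory invoked up to a maximum size, and a reset on release) can be sketched in Go. The type and method names here are illustrative, not from any particular library:

```go
package main

import (
	"fmt"
	"sync"
)

type buffer struct{ data []byte }

// pool keeps available objects in a slice guarded by a mutex and creates
// new objects through a factory function until a maximum size is reached.
type pool struct {
	mu        sync.Mutex
	available []*buffer
	created   int
	max       int
	factory   func() *buffer
}

// get returns an idle object, lazily creates one below the cap,
// or reports exhaustion.
func (p *pool) get() (*buffer, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.available); n > 0 {
		b := p.available[n-1]
		p.available = p.available[:n-1]
		return b, true
	}
	if p.created < p.max { // lazy creation up to the configured maximum
		p.created++
		return p.factory(), true
	}
	return nil, false // at capacity and nothing idle
}

// put resets the object to a clean state and makes it available again.
func (p *pool) put(b *buffer) {
	b.data = b.data[:0] // eliminate residual data from the prior use
	p.mu.Lock()
	p.available = append(p.available, b)
	p.mu.Unlock()
}

func main() {
	p := &pool{max: 1, factory: func() *buffer { return &buffer{} }}
	b, _ := p.get()
	b.data = append(b.data, 'x')

	_, ok := p.get() // maximum reached and nothing idle
	fmt.Println(ok)  // false

	p.put(b)
	b2, _ := p.get()
	fmt.Println(len(b2.data)) // 0: state was reset on release
}
```

The mutex plays the role that a concurrent collection plays in the .NET implementations described above: acquire and release remain atomic with respect to each other.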

Handling Resource Exhaustion

In object pool implementations, exhaustion occurs when a request to acquire an object finds no idle instances available and the pool has reached its maximum size. Detection typically happens during the acquire operation, such as the borrowObject method in standard pooling libraries, where the pool checks the number of idle objects and the total checked-out count against configurable limits. If the pool is depleted, the system must decide on an appropriate response to prevent application failure while adhering to resource constraints. Common strategies for handling exhaustion include blocking the requesting thread until an object becomes available, dynamically growing the pool by creating a new object if below the maximum size, or failing the request gracefully by throwing an exception. Blocking can incorporate timeouts to avoid indefinite waits, configurable via parameters like maxWaitMillis, ensuring threads do not hang unnecessarily while promoting fairness among concurrent requesters through algorithms that prioritize waiting order. Growable pools, often the default in flexible implementations, allow expansion up to a predefined limit, balancing demand with preallocated resources, whereas fixed-size pools strictly enforce boundaries by rejecting excess requests. Configuration options play a critical role in tailoring exhaustion handling to application needs, including minimum and maximum pool sizes (minIdle and maxTotal), growth policies that dictate when and how many new objects to instantiate, and mechanisms to reclaim idle resources. Eviction threads periodically remove unused objects based on idle-time thresholds (minEvictableIdleTimeMillis), helping maintain pool health without unbounded growth and mitigating memory pressure in high-load environments. These approaches involve inherent trade-offs, particularly in high-load scenarios where responsiveness must be weighed against resource limits; for instance, aggressive growth enhances responsiveness but risks resource exhaustion, while strict blocking conserves resources at the cost of latency spikes. Timeouts and eviction policies help mitigate these by allowing controlled failures or cleanup, ensuring the pool remains sustainable without overcommitting system resources.

Challenges and Considerations

Common Pitfalls

One common pitfall in implementing the object pool pattern is state leakage, where objects returned to the pool are not properly reset, allowing residual data from prior uses to persist and affect subsequent operations. This occurs when developers overlook the need to invoke a reset or cleanup method upon object return, leading to unexpected behavior such as incorrect calculations or security vulnerabilities from leaked sensitive information. For instance, in resource-intensive applications like game engines, failing to clear an object's internal state, such as position data in a game entity, can cause erratic movement in reused instances. Memory leaks represent another frequent issue, arising from inadequate handling of object lifecycle management, such as forgetting to return borrowed objects to the pool or allowing unbounded growth without reclamation. In scenarios where exceptions interrupt normal flow or references to pooled objects escape unintentionally, these objects become unreachable for reuse but remain allocated, gradually exhausting available memory. This problem is exacerbated in long-running systems, where even infrequent leaks accumulate over time, potentially crashing the application. Proper tracking mechanisms are essential to mitigate this, yet their absence often leads to subtle, hard-to-diagnose failures. In thread-safe object pools, deadlocks can emerge from improper synchronization strategies, particularly when acquiring locks on the pool while invoking external methods that themselves require locks. A notable example is seen in connection pooling libraries, where an evictor thread holds the pool lock during object creation via a factory that synchronizes on a shared object such as the driver manager, causing mutual waiting between threads. This lock-ordering violation results in indefinite blocking, halting application progress, and requires careful design of lock scopes to prevent. Contention on coarse-grained locks can also lead to thread starvation, where some threads repeatedly fail to acquire resources amid high demand. Overuse of the object pool pattern introduces unnecessary complexity when applied to inexpensive-to-create objects, such as lightweight structs or short-lived instances in garbage-collected environments, where the overhead of pool management outweighs creation costs. In modern runtimes like .NET or Java, automatic memory management handles allocation efficiently, making pools redundant and prone to bugs from added indirection, such as manual reset logic. This misuse not only complicates the code but can degrade performance through the extra acquire-and-release steps on objects that benefit more from direct instantiation. Developers should reserve pools for truly costly resources, such as database connections, to avoid these pitfalls.

Criticisms

The object pool pattern introduces significant complexity overhead, particularly when applied to lightweight or simple objects, where the added logic for managing the pool can outweigh the benefits compared to just-in-time creation in modern runtimes. Maintaining custom object pools often clutters the codebase, increases the overall maintenance burden, and can even degrade performance due to the synchronization and management logic required. This overhead is especially pronounced for objects that are inexpensive to instantiate and destroy, making manual pooling unnecessary and counterproductive in many scenarios. Advancements in runtime environments have further diminished the pattern's relevance as a general solution. Framework-level built-in pooling mechanisms, like .NET's ArrayPool, provide standardized ways to reuse arrays and buffers, reducing the need for developers to implement and maintain their own pools for common use cases such as temporary buffers. These improvements, driven by hardware advances and sophisticated garbage collectors, have made the pattern less critical outside of specific high-cost resource scenarios. Despite this, the pattern retains value in targeted applications, such as high-performance game development or managing expensive resources like database connections, where custom pooling can still yield benefits. The pattern has been criticized as an anti-pattern for lightweight objects in contemporary runtimes, where efficient garbage collection and allocation mechanisms often suffice.

Practical Examples

Go Implementation

In Go, the object pool pattern is idiomatically implemented using the sync.Pool type from the standard library for managing temporary objects that benefit from reuse to minimize allocation overhead and garbage collection pressure. This type provides thread-safe storage and retrieval, automatically pruning items during garbage collection cycles to balance memory usage against reuse. For scenarios requiring a bounded number of long-lived resources, such as database connections, a buffered channel serves as an efficient, concurrent-safe mechanism to enforce capacity limits and enable non-blocking operations via goroutines. A typical implementation structures the pool as a struct containing a buffered channel of pooled objects, with a constructor and accessor methods for acquisition and release. The NewPool constructor initializes the channel with a fixed capacity and preallocates the objects, ensuring the pool starts at full size. The Get method attempts a non-blocking receive from the channel using a select statement; if the pool is exhausted, it returns an error to signal resource unavailability, allowing the caller to handle fallback logic like queuing or rejection. Upon release, the Put method attempts to send the object back to the channel, closing invalid objects if the buffer is full to prevent leaks. Object validation, such as checking connection liveness, is performed on acquisition to ensure usability, often involving a ping or health check. The following code outlines a simple connection pool for a hypothetical database client, where Conn represents a reusable database connection (in practice, this could wrap a *sql.Conn or custom driver handle). Error handling for exhaustion integrates with Go's goroutine model for concurrent requests, enabling non-blocking acquires that avoid deadlocks.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Conn represents a pooled database connection.
type Conn struct {
	// Fields for connection state, e.g., *sql.DB handle or net.Conn.
	valid bool
}

// newConn creates a new connection (simulated here).
func newConn() *Conn {
	// In reality, establish a database connection, e.g., via database/sql.
	return &Conn{valid: true}
}

// validate checks if the connection is still usable.
func (c *Conn) validate() error {
	if !c.valid {
		return errors.New("invalid connection")
	}
	// Simulate a ping or health check.
	time.Sleep(1 * time.Millisecond) // Placeholder for actual validation.
	return nil
}

// close releases the connection.
func (c *Conn) close() {
	c.valid = false // In reality, close the underlying resource.
}

// Pool manages a fixed-size pool of connections using a buffered channel.
type Pool struct {
	conns chan *Conn
}

// NewPool creates a pool with the given size.
func NewPool(size int) *Pool {
	p := &Pool{conns: make(chan *Conn, size)}
	for i := 0; i < size; i++ {
		p.conns <- newConn()
	}
	return p
}

// Get acquires a connection from the pool without blocking.
func (p *Pool) Get() (*Conn, error) {
	select {
	case conn := <-p.conns:
		if err := conn.validate(); err != nil {
			conn.close()
			return p.Get() // Retry with the next pooled connection, or handle differently.
		}
		return conn, nil
	default:
		return nil, errors.New("pool exhausted")
	}
}

// Put returns a connection to the pool.
func (p *Pool) Put(conn *Conn) {
	select {
	case p.conns <- conn:
		// Successfully returned.
	default:
		conn.close() // Discard if the pool is full.
	}
}

func main() {
	pool := NewPool(5) // Fixed size of 5 connections.
	// Simulate concurrent use in web server handler goroutines.
	for i := 0; i < 10; i++ {
		go func(id int) {
			conn, err := pool.Get()
			if err != nil {
				fmt.Printf("Request %d: %v\n", id, err)
				return // Handle exhaustion, e.g., queue or 503 error.
			}
			defer pool.Put(conn)
			// Use conn for a database query.
			fmt.Printf("Request %d: using connection\n", id)
			time.Sleep(100 * time.Millisecond) // Simulate work.
		}(i)
	}
	time.Sleep(1 * time.Second) // Wait for goroutines.
}
```


This example demonstrates pooling database connections in a web server context, where multiple goroutines handle incoming requests concurrently without exceeding the pool size, thus preventing database overload. The non-blocking Get leverages Go's select for efficient concurrency, allowing requests to proceed or fail gracefully under load.
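For short-lived temporary objects, the sync.Pool type mentioned above is usually the more idiomatic choice than a hand-rolled channel pool. The following minimal sketch (the `render` helper is hypothetical) pools bytes.Buffer instances to cut allocation churn; note that sync.Pool gives no capacity guarantees and may drop idle items at any garbage collection, so it suits scratch objects rather than connections.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable bytes.Buffer instances. New is called
// only when the pool has no idle buffer available.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// render borrows a buffer, uses it, and returns it reset for reuse.
func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // Clear state so the next caller starts fresh.
		bufPool.Put(buf)
	}()
	fmt.Fprintf(buf, "Hello, %s!", name)
	return buf.String()
}

func main() {
	fmt.Println(render("World"))
}
```

Because Get may invoke New at any time, callers must not assume they receive a previously used object, and Put must only receive objects whose state has been reset.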

C# Implementation

In C#, the object pool pattern is commonly implemented using the ObjectPool<T> class from the Microsoft.Extensions.ObjectPool namespace, which is part of the .NET ecosystem and provides a thread-safe mechanism for reusing objects to minimize allocation overhead. This package is integrated into ASP.NET Core and other .NET applications via dependency injection, allowing developers to configure pools with custom creation and reset policies. For simpler or custom scenarios, developers can build pools using ConcurrentQueue<T> or ConcurrentBag<T> from System.Collections.Concurrent to ensure thread safety in multi-threaded environments. A typical implementation involves a pool manager class that exposes methods like Create for initialization, Rent to acquire an object, and Return to release it back to the pool. The ObjectPool<T> class follows this structure, where Rent returns an object (creating a new one if the pool is empty) and Return resets and requeues it for reuse. To integrate with C#'s resource management, objects in the pool often implement IDisposable, enabling automatic release via using statements or try-finally blocks, which call the pool's return logic upon disposal. Here's a basic example of a custom pool using ConcurrentQueue<T> for StringBuilder instances, illustrating the core methods. To enable automatic return with using, a wrapper implementing IDisposable is used:

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;

// Usage (top-level statements must precede type declarations).
var pool = new StringBuilderPool();
using var psb = new PooledStringBuilder(pool);
psb.Builder.Append("Hello, World!");

public class StringBuilderPool
{
    private readonly ConcurrentQueue<StringBuilder> _pool = new();

    public StringBuilder Rent()
    {
        if (_pool.TryDequeue(out var sb))
        {
            return sb;
        }
        return new StringBuilder();
    }

    public void Return(StringBuilder sb)
    {
        sb.Clear(); // Reset for reuse.
        _pool.Enqueue(sb);
    }
}

public class PooledStringBuilder : IDisposable
{
    private readonly StringBuilderPool _pool;
    public StringBuilder Builder { get; }

    public PooledStringBuilder(StringBuilderPool pool)
    {
        _pool = pool;
        Builder = pool.Rent();
    }

    public void Dispose()
    {
        _pool.Return(Builder);
    }
}
```


For more advanced usage, ObjectPool<T> supports a policy-based approach where a PooledObjectPolicy<T> defines creation (Create) and reset (Return) behaviors, ensuring objects are properly sanitized before reuse. A practical example in the .NET ecosystem is pooling the handlers behind HttpClient instances to avoid socket exhaustion and DNS resolution overhead, managed through IHttpClientFactory. This factory creates named or typed clients whose underlying message handlers are pooled and recycled, supporting non-blocking operations with async/await. In an ASP.NET Core application, registration occurs in Program.cs or Startup.cs:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient(); // Enables IHttpClientFactory with default handler pooling.
var app = builder.Build();

// Usage in a service
public class MyService
{
    private readonly IHttpClientFactory _factory;

    public MyService(IHttpClientFactory factory) => _factory = factory;

    public async Task<string> GetDataAsync()
    {
        using var client = _factory.CreateClient(); // Lightweight client over a pooled handler.
        var response = await client.GetAsync("https://api.example.com/data");
        return await response.Content.ReadAsStringAsync();
    }
}
```


This approach recycles pooled HttpMessageHandler instances across requests, improving throughput in high-concurrency scenarios. Best practices for C# object pools include using factory methods for object creation to encapsulate initialization logic, and disposal to handle cleanup, such as closing connections or clearing state. For thread safety, prefer ConcurrentQueue<T> over Queue<T> in concurrent applications, and configure pool sizes based on workload to balance memory usage and performance; published guidance often suggests starting with a minimum of 1 and a maximum of 1024 for most pools. Always implement reset operations to prevent data leakage between uses, and monitor pool metrics via diagnostics for tuning.

Java Implementation

In Java, the object pool pattern leverages the language's concurrent utilities for thread-safe object management, such as the ArrayBlockingQueue class from the java.util.concurrent package, which provides a bounded, blocking queue implementation suitable for maintaining a fixed-size pool of reusable objects. This approach ensures atomic operations for borrowing and returning objects without explicit synchronization in many cases, as the queue handles internal locking. For more advanced scenarios, the Apache Commons Pool library offers a configurable GenericObjectPool class that supports features like idle object eviction, validation, and abandonment tracking, making it ideal for production environments requiring robust pooling. A basic generic object pool can be implemented using ArrayBlockingQueue to store and dispense objects, with optional validation to ensure returned objects are in a usable state. The pool preallocates objects via a supplier function and uses the queue's blocking methods for thread-safe acquisition and release. For validation, pooled objects may override equals and hashCode methods to facilitate equality checks during reuse, preventing duplicates or invalid states in the pool.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class ObjectPool<T> {
    private final BlockingQueue<T> pool;
    private final Supplier<T> factory;

    public ObjectPool(int capacity, Supplier<T> factory) {
        this.pool = new ArrayBlockingQueue<>(capacity);
        this.factory = factory;
        for (int i = 0; i < capacity; i++) {
            pool.offer(factory.get());
        }
    }

    public T borrow() throws InterruptedException {
        T obj = pool.take();
        // Optional validation: check that obj is usable before handing it out.
        if (!isValid(obj)) {
            // Discard the invalid object (call a close/destroy method on T if available).
            obj = factory.get(); // Create a replacement if invalid.
        }
        return obj;
    }

    public void returnToPool(T obj) {
        if (obj != null && isValid(obj)) {
            pool.offer(obj);
        }
        // Invalid objects are simply discarded.
    }

    private boolean isValid(T obj) {
        // Custom validation logic, e.g., state checks via equals/hashCode.
        return obj != null; // Placeholder; implement based on T.
    }
}
```


This implementation draws from standard concurrent queue usage in Java, where take() blocks until an object is available and offer() adds returned objects without blocking. A common use case for the object pool pattern in Java is pooling JDBC connections to databases, which are resource-intensive to create due to network overhead. The Tomcat JDBC Connection Pool, for instance, manages a pool of java.sql.Connection objects with configurable limits on active and idle connections, integrating seamlessly with Java's try-with-resources statement to ensure connections are automatically returned to the pool after use.

```java
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

// Configuration example
PoolProperties p = new PoolProperties();
p.setUrl("jdbc:mysql://localhost:3306/test");
p.setDriverClassName("com.mysql.cj.jdbc.Driver");
p.setMaxActive(100); // Maximum active connections in the pool.

DataSource datasource = new DataSource();
datasource.setPoolProperties(p);

// Usage with try-with-resources
try (java.sql.Connection con = datasource.getConnection()) {
    java.sql.Statement st = con.createStatement();
    java.sql.ResultSet rs = st.executeQuery("SELECT * FROM user");
    // Process results.
} // Connection automatically returned to the pool.
```


This setup reuses connections efficiently, reducing latency in database-intensive applications. In the JVM environment, object pooling requires attention to serialization for objects that may need to be persisted or transferred, typically by implementing the java.io.Serializable interface to maintain pool integrity across sessions. Reflection can be utilized in pool factories, such as Apache Commons Pool's PooledObjectFactory, to dynamically inspect and validate object states during creation or reuse without hardcoded type knowledge.
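The automatic-return idiom that try-with-resources provides for connections can be applied to any pooled object by wrapping the borrow in an AutoCloseable lease, analogous to the IDisposable wrapper in the C# example. The sketch below uses only the standard library; the PoolDemo and Lease names are hypothetical, and the pooled StringBuilder stands in for a more expensive resource.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PoolDemo {
    // A tiny fixed-size pool of reusable StringBuilder instances.
    static final BlockingQueue<StringBuilder> pool = new ArrayBlockingQueue<>(2);
    static {
        pool.offer(new StringBuilder());
        pool.offer(new StringBuilder());
    }

    // Lease borrows an object on construction and returns it on close(),
    // so try-with-resources guarantees the object goes back to the pool.
    static class Lease implements AutoCloseable {
        final StringBuilder sb;

        Lease() throws InterruptedException {
            sb = pool.take();
        }

        @Override
        public void close() {
            sb.setLength(0); // Reset state before returning to the pool.
            pool.offer(sb);
        }
    }

    public static void main(String[] args) throws Exception {
        try (Lease lease = new Lease()) {
            lease.sb.append("pooled");
            System.out.println(lease.sb);
        }
        System.out.println("available: " + pool.size());
    }
}
```

The reset inside close() mirrors the validation advice above: clearing state before returning an object prevents data from leaking between borrowers.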
