
synk/pool

A pool of byte buffers that are reused instead of allocated and freed.

Usage

```go
sizedPool := pool.GetSizedPool()
unsizedPool := pool.GetUnsizedPool()

// as []byte
sized := sizedPool.GetSized(size)
defer sizedPool.Put(sized)

unsized := unsizedPool.Get()
defer unsizedPool.Put(unsized)

// as *bytes.Buffer
buf := unsizedPool.GetBuffer()
defer unsizedPool.PutBuffer(buf)

sizedBuf := sizedPool.GetSizedBuffer(size)
defer sizedPool.PutSizedBuffer(sizedBuf)
```

Design Philosophy

This pool implementation follows a tiered memory management strategy with several key goals:

  • Minimize allocations and reduce GC pressure - Reuse buffers instead of allocating new ones
  • Reduce memory waste - Match buffer sizes to actual needs through tiered sizing
  • Prevent memory leaks - Use weak references to allow GC cleanup
  • High performance - Use lock-free channels and unsafe optimizations

Core Architecture

Dual Pool System

UnsizedBytesPool

  • Single pool for general-purpose buffers
  • All buffers start at MinAllocSize (4KB)
  • Good for variable-size use cases

SizedBytesPool

  • 11 tiered pools: 4KB, 8KB, 16KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 2MB, 4MB
  • Plus a large pool for buffers >4MB (very large buffers are rare)
  • Reduces memory waste by size-matching

Weak Reference Mechanism

The pool stores buffers as weak.Pointer[[]byte] values:

```go
type weakBuf = weak.Pointer[[]byte]

func makeWeak(b *[]byte) weakBuf {
    return weak.Make(b)
}
```

Goal:

If the GC needs memory and no strong references exist to a buffer, it can be collected even though the buffer is still in the pool channel. This prevents memory leaks when pools are underutilized.

Sized Pool Allocation Strategy

Pool Index Calculation

```go
func (p *SizedBytesPool) poolIdx(size int) int {
    if size <= 0 {
        return 0
    }
    return min(SizedPools-1, max(0, bits.Len(uint(size-1))-11))
}
```

This uses bit manipulation to find the appropriate tier:

  • bits.Len(uint(size-1)) finds the position of the highest set bit, i.e. the ceiling of log2(size)
  • Subtracting 11 aligns the result with the 4KB (2^12) base size
  • min and max clamp the result into the valid range of pool indices, mapping sizes efficiently to the nearest power-of-2 tier

Smart Buffer Splitting

When a larger buffer is retrieved but only part is needed, the excess is returned to the pool:

```go
remainingSize := capB - size
if remainingSize > p.min { // only split if remainder is useful
    p.put(b[size:], true)  // return excess to pool
    front := b[:size:size] // use requested portion
    storeFullCap(front, capB) // remember original capacity
    return front
}
```

Capacity Restoration System

The pool maintains full capacity information for buffers that have been sliced:

```go
func storeFullCap(b []byte, c int) {
    if c == cap(b) {
        return // no change needed
    }
    ptr := sliceStruct(&b).ptr
    sizedFullCaps.Store(ptr, c) // store original capacity
}

func restoreFullCap(b *[]byte) {
    ptr := sliceStruct(b).ptr
    if fullCap, ok := sizedFullCaps.LoadAndDelete(ptr); ok {
        setCap(b, fullCap) // restore original capacity
    }
}
```

This uses unsafe pointer manipulation to preserve the original capacity across slice operations.

Performance Optimizations

Channel Sizing

```go
func poolChannelSize(idx int) int {
    return max(8, 256>>uint(idx))
}
```

Smaller buffers (used more frequently) get larger channels to reduce contention, while larger buffers get smaller channels to save memory.

Put() operations

```go
func (p UnsizedBytesPool) Put(b []byte) {
    if b == nil {
        return
    }
    put(b, p.pool)
}
```

Put is a no-op for nil buffers, so callers can defer it unconditionally.

Lock-free Operations

All pool operations use channel selects instead of mutexes, enabling concurrent access without blocking.

Usage Patterns

Unsized Pool

Good for cases where buffer sizes vary unpredictably (e.g. reading from an io.Reader or writing to an io.Writer).

```go
var reader io.Reader = ...
buf := unsizedBytesPool.GetBuffer() // returns a *bytes.Buffer
defer unsizedBytesPool.PutBuffer(buf)

_, err := io.Copy(buf, reader)
if err != nil {
    return err
}
```

Sized Pool

Good for cases where buffer sizes are known and predictable (e.g. HTTP responses).

```go
var reader io.Reader = ...
buf := sizedBytesPool.GetSized(size) // returns a []byte
defer sizedBytesPool.Put(buf)

_, err := io.ReadFull(reader, buf)
if err != nil {
    return err
}
```

Benchmarks

These benchmarks are not exhaustive, but they are representative of the pool's performance.

Randomly sized buffers within 4MB

| Benchmark       | Iterations | ns/op   | B/op      | allocs/op |
|-----------------|-----------:|--------:|----------:|----------:|
| GetAll/unsized  | 2,236,105  | 555.9   | 34        | 2         |
| GetAll/sized    | 842,488    | 1,425   | 90        | 4         |
| GetAll/make     | 6,498      | 194,062 | 1,039,898 | 1         |

Randomly sized buffers (may exceed 4MB)

| Benchmark                 | Iterations | ns/op   | B/op      | allocs/op |
|---------------------------|-----------:|--------:|----------:|----------:|
| GetAllExceedsMax/unsized  | 2,203,759  | 544.3   | 37        | 2         |
| GetAllExceedsMax/sized    | 1,312,588  | 941.9   | 72        | 3         |
| GetAllExceedsMax/make     | 3,937      | 336,113 | 2,126,743 | 1         |

Concurrent allocations

| Benchmark           | Iterations | ns/op   | ns/op_alloc | ns/op_total | ns/op_work | B/op    | allocs/op |
|---------------------|-----------:|--------:|------------:|------------:|-----------:|--------:|----------:|
| workers-1-unsized   | 3,978      | 294,875 | 1,302       | 293,926     | 292,624    | 3,408   | 3         |
| workers-1-sized     | 4,286      | 296,334 | 1,237       | 295,218     | 293,981    | 2,930   | 5         |
| workers-1-make      | 3,340      | 415,523 | 97,299      | 415,409     | 318,110    | 492,239 | 2         |
| workers-2-unsized   | 8,292      | 147,362 | 830.0       | 294,031     | 293,201    | 3,269   | 2         |
| workers-2-sized     | 8,336      | 148,185 | 1,176       | 295,486     | 294,310    | 2,436   | 5         |
| workers-2-make      | 5,412      | 219,521 | 93,075      | 440,315     | 347,240    | 493,141 | 2         |
| workers-4-unsized   | 15,853     | 74,667  | 1,378       | 298,151     | 296,773    | 3,417   | 2         |
| workers-4-sized     | 16,075     | 75,170  | 2,202       | 299,441     | 297,239    | 6,767   | 5         |
| workers-4-make      | 9,884      | 115,854 | 81,081      | 462,411     | 381,330    | 486,525 | 1         |
| workers-8-unsized   | 28,244     | 42,162  | 1,735       | 336,454     | 334,719    | 3,830   | 2         |
| workers-8-sized     | 26,013     | 46,224  | 2,310       | 368,261     | 365,951    | 4,932   | 5         |
| workers-8-make      | 16,420     | 71,518  | 108,927     | 570,788     | 461,861    | 496,645 | 1         |
| workers-16-unsized  | 43,272     | 29,189  | 3,696       | 463,896     | 460,200    | 4,983   | 2         |
| workers-16-sized    | 36,670     | 34,641  | 6,803       | 548,070     | 541,267    | 7,382   | 5         |
| workers-16-make     | 19,166     | 65,258  | 299,753     | 1,011,464   | 711,711    | 489,229 | 2         |
| workers-32-unsized  | 32,059     | 38,423  | 19,663      | 749,792     | 730,129    | 13,229  | 2         |
| workers-32-sized    | 35,071     | 37,116  | 23,897      | 726,144     | 702,247    | 15,378  | 5         |
| workers-32-make     | 19,437     | 63,402  | 578,680     | 1,356,542   | 777,862    | 466,336 | 2         |

Released under the MIT License.