slice.go - Understanding Go's Slice Container

Reading the Go source code to understand the data structure and algorithms behind the built-in slice container.

The slice implementation lives in runtime/slice.go, a file of only 318 lines in total.

This article uses Go 1.17.2, the latest release of the Go source at the time of writing, as its reference.

Data Structure

type slice struct {
	array unsafe.Pointer
	len   int
	cap   int
}

The slice data structure is not complicated; it is essentially a thin wrapper around an array, similar to Java's ArrayList.

The underlying data is stored in array, len records the number of elements currently stored, and cap records the element capacity of the memory block that the array pointer points to.
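
To make the fields concrete, here is a small user-level sketch (not runtime code) that mirrors the header with a hypothetical sliceHeader struct; the standard library's reflect.SliceHeader plays a similar role. Relying on this layout in production code is unsafe and version-dependent:

package main

import (
	"fmt"
	"unsafe"
)

// sliceHeader mirrors the runtime's slice struct, for illustration only.
type sliceHeader struct {
	array unsafe.Pointer
	len   int
	cap   int
}

func main() {
	s := make([]int, 3, 8)
	h := (*sliceHeader)(unsafe.Pointer(&s))
	fmt.Println(h.len, h.cap)   // 3 8, same values as the built-ins report
	fmt.Println(len(s), cap(s)) // 3 8

	// Re-slicing changes len in the header but keeps the same array pointer.
	t := s[:5]
	ht := (*sliceHeader)(unsafe.Pointer(&t))
	fmt.Println(ht.array == h.array, ht.len, ht.cap) // true 5 8
}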

Algorithms

Construction (makeslice)

func makeslice(et *_type, len, cap int) unsafe.Pointer {
	mem, overflow := math.MulUintptr(et.size, uintptr(cap))
	if overflow || mem > maxAlloc || len < 0 || len > cap {
		// NOTE: Produce a 'len out of range' error instead of a
		// 'cap out of range' error when someone does make([]T, bignumber).
		// 'cap out of range' is true too, but since the cap is only being
		// supplied implicitly, saying len is clearer.
		// See golang.org/issue/4085.
		mem, overflow := math.MulUintptr(et.size, uintptr(len))
		if overflow || mem > maxAlloc || len < 0 {
			panicmakeslicelen()
		}
		panicmakeslicecap()
	}

	return mallocgc(mem, et, true)
}

The constructor takes et (short for element type), which describes the type of element stored in the slice.

First, math.MulUintptr performs uintptr multiplication with overflow detection to compute the required memory size.

https://pkg.go.dev/runtime/internal/math#MulUintptr

https://cs.opensource.google/go/go/+/go1.17.2:src/runtime/internal/math/math.go;l=13

The implementation of math.MulUintptr is quite clever; we will not dig into it here.
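
For intuition only, a minimal sketch of overflow-checked multiplication might look like the hypothetical helper below; the runtime's real math.MulUintptr adds a cheaper fast path for small operands:

package main

import "fmt"

const maxUintptr = ^uintptr(0)

// mulCheckOverflow is a hypothetical helper that illustrates the idea only:
// a*b overflows uintptr exactly when b != 0 && a > maxUintptr/b.
func mulCheckOverflow(a, b uintptr) (product uintptr, overflow bool) {
	if b != 0 && a > maxUintptr/b {
		return a * b, true // product wrapped around
	}
	return a * b, false
}

func main() {
	fmt.Println(mulCheckOverflow(8, 10))         // 80 false
	fmt.Println(mulCheckOverflow(maxUintptr, 2)) // wrapped product, true
}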

Then, based on the computed memory size, mallocgc (located in runtime/malloc.go; the allocator is modeled on TCMalloc) allocates a memory block of the appropriate size.
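
The note about issue 4085 in the code above can be observed from user code. On a 64-bit build, the sketch below (with an absurdly large length) should panic with a 'len out of range' message rather than 'cap out of range', because the cap was only supplied implicitly:

package main

import "fmt"

func main() {
	n := 1 << 62 // far too large to allocate on any real machine
	// makeslice detects the overflow and panics; per the comment in its
	// source, the error reports len rather than cap.
	s := make([]int64, n) // panic: runtime error: makeslice: len out of range
	fmt.Println(len(s))
}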

Growth (growslice)

A slice grows automatically when append needs more room than the current capacity.
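
For example (user code; the exact capacities depend on the Go version and the allocator's size classes):

package main

import "fmt"

func main() {
	s := make([]int, 0, 2)
	fmt.Println(len(s), cap(s)) // 0 2

	// Appending within capacity reuses the existing array.
	s = append(s, 1, 2)
	fmt.Println(len(s), cap(s)) // 2 2

	// This append exceeds cap, so growslice allocates a larger array
	// and copies the old elements into it.
	s = append(s, 3)
	fmt.Println(len(s), cap(s)) // 3 4 (capacity roughly doubles for small slices)
}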

// growslice handles slice growth during append.
// It is passed the slice element type, the old slice, and the desired new minimum capacity,
// and it returns a new slice with at least that capacity, with the old data
// copied into it.
// The new slice's length is set to the old slice's length,
// NOT to the new requested capacity.
// This is for codegen convenience. The old slice's length is used immediately
// to calculate where to write new values during an append.
// TODO: When the old backend is gone, reconsider this decision.
// The SSA backend might prefer the new length or to return only ptr/cap and save stack space.
func growslice(et *_type, old slice, cap int) slice {
	if raceenabled {
		callerpc := getcallerpc()
		racereadrangepc(old.array, uintptr(old.len*int(et.size)), callerpc, funcPC(growslice))
	}
	if msanenabled {
		msanread(old.array, uintptr(old.len*int(et.size)))
	}

	if cap < old.cap {
		panic(errorString("growslice: cap out of range"))
	}

	if et.size == 0 {
		// append should not create a slice with nil pointer but non-zero len.
		// We assume that append doesn't need to preserve old.array in this case.
		return slice{unsafe.Pointer(&zerobase), old.len, cap}
	}

	newcap := old.cap
	doublecap := newcap + newcap
	if cap > doublecap {
		newcap = cap
	} else {
		if old.cap < 1024 {
			newcap = doublecap
		} else {
			// Check 0 < newcap to detect overflow
			// and prevent an infinite loop.
			for 0 < newcap && newcap < cap {
				newcap += newcap / 4
			}
			// Set newcap to the requested cap when
			// the newcap calculation overflowed.
			if newcap <= 0 {
				newcap = cap
			}
		}
	}

	var overflow bool
	var lenmem, newlenmem, capmem uintptr
	// Specialize for common values of et.size.
	// For 1 we don't need any division/multiplication.
	// For sys.PtrSize, compiler will optimize division/multiplication into a shift by a constant.
	// For powers of 2, use a variable shift.
	switch {
	case et.size == 1:
		lenmem = uintptr(old.len)
		newlenmem = uintptr(cap)
		capmem = roundupsize(uintptr(newcap))
		overflow = uintptr(newcap) > maxAlloc
		newcap = int(capmem)
	case et.size == sys.PtrSize:
		lenmem = uintptr(old.len) * sys.PtrSize
		newlenmem = uintptr(cap) * sys.PtrSize
		capmem = roundupsize(uintptr(newcap) * sys.PtrSize)
		overflow = uintptr(newcap) > maxAlloc/sys.PtrSize
		newcap = int(capmem / sys.PtrSize)
	case isPowerOfTwo(et.size):
		var shift uintptr
		if sys.PtrSize == 8 {
			// Mask shift for better code generation.
			shift = uintptr(sys.Ctz64(uint64(et.size))) & 63
		} else {
			shift = uintptr(sys.Ctz32(uint32(et.size))) & 31
		}
		lenmem = uintptr(old.len) << shift
		newlenmem = uintptr(cap) << shift
		capmem = roundupsize(uintptr(newcap) << shift)
		overflow = uintptr(newcap) > (maxAlloc >> shift)
		newcap = int(capmem >> shift)
	default:
		lenmem = uintptr(old.len) * et.size
		newlenmem = uintptr(cap) * et.size
		capmem, overflow = math.MulUintptr(et.size, uintptr(newcap))
		capmem = roundupsize(capmem)
		newcap = int(capmem / et.size)
	}

	// The check of overflow in addition to capmem > maxAlloc is needed
	// to prevent an overflow which can be used to trigger a segfault
	// on 32bit architectures with this example program:
	//
	// type T [1<<27 + 1]int64
	//
	// var d T
	// var s []T
	//
	// func main() {
	// 	s = append(s, d, d, d, d)
	// 	print(len(s), "\n")
	// }
	if overflow || capmem > maxAlloc {
		panic(errorString("growslice: cap out of range"))
	}

	var p unsafe.Pointer
	if et.ptrdata == 0 {
		p = mallocgc(capmem, nil, false)
		// The append() that calls growslice is going to overwrite from old.len to cap (which will be the new length).
		// Only clear the part that will not be overwritten.
		memclrNoHeapPointers(add(p, newlenmem), capmem-newlenmem)
	} else {
		// Note: can't use rawmem (which avoids zeroing of memory), because then GC can scan uninitialized memory.
		p = mallocgc(capmem, et, true)
		if lenmem > 0 && writeBarrier.enabled {
			// Only shade the pointers in old.array since we know the destination slice p
			// only contains nil pointers because it has been cleared during alloc.
			bulkBarrierPreWriteSrcOnly(uintptr(p), uintptr(old.array), lenmem-et.size+et.ptrdata)
		}
	}
	memmove(p, old.array, lenmem)

	return slice{p, old.len, newcap}
}

During growth, if the requested capacity already exceeds twice the current capacity, the larger requested capacity wins.

If the requested capacity is less than twice the current capacity, there are two cases:

  1. If the current capacity is small (< 1024), the capacity is simply doubled (2x). Doubling helps avoid frequent reallocations, and while it can leave unused headroom, the absolute waste is small when the slice is small.
  2. If the current capacity is no longer small (>= 1024), doubling could waste a lot of memory, so the capacity instead grows in 1.25x steps until it reaches the target; this satisfies the requested capacity while avoiding excessive waste (see the worked trace after this list).
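
As a concrete trace (ignoring the later roundupsize adjustment): with old.cap = 1024 and a requested cap of 1100, doublecap = 2048 >= 1100, so the 1.25x loop runs once: 1024 -> 1280 >= 1100, giving newcap = 1280. The hypothetical helper below reproduces the formula in isolation:

package main

import "fmt"

// newcapFor reproduces growslice's Go 1.17 capacity formula in isolation,
// before the roundupsize step that may bump the result to a malloc size class.
func newcapFor(oldCap, needed int) int {
	newcap := oldCap
	doublecap := newcap + newcap
	if needed > doublecap {
		return needed
	}
	if oldCap < 1024 {
		return doublecap
	}
	for 0 < newcap && newcap < needed {
		newcap += newcap / 4
	}
	if newcap <= 0 {
		return needed
	}
	return newcap
}

func main() {
	fmt.Println(newcapFor(4, 100))     // 100: more than double requested, take it directly
	fmt.Println(newcapFor(512, 513))   // 1024: small slice, double
	fmt.Println(newcapFor(1024, 1100)) // 1280: 1024 -> 1280 via 1.25x steps
}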

runtime: make slice growth formula a bit smoother

It is worth noting that this growth algorithm is not necessarily optimal, and there is still room for improvement. A recent commit on the master branch (linked above) experiments with a smoother growth function (and parameters). A higher growth factor helps avoid frequent reallocations (and the potential system-call cost of allocating memory), but it also makes memory over-allocation more likely.
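
For reference, the smoother formula from that commit looks roughly like the sketch below; the threshold of 256 and the growth expression are paraphrased from my reading of the CL and may change before they ship in a release:

package main

import "fmt"

// threshold and the growth expression below are assumptions taken from the
// commit referenced above, not from the Go 1.17.2 source discussed here.
const threshold = 256

func smootherNewcap(oldCap, needed int) int {
	newcap := oldCap
	doublecap := newcap + newcap
	if needed > doublecap {
		return needed
	}
	if oldCap < threshold {
		return doublecap
	}
	for 0 < newcap && newcap < needed {
		// Transition smoothly from 2x growth for small slices
		// towards 1.25x growth for large slices.
		newcap += (newcap + 3*threshold) / 4
	}
	if newcap <= 0 {
		return needed
	}
	return newcap
}

func main() {
	// The Go 1.17 formula gives 2046 for (1023, 1025) but 1280 for (1024, 1025),
	// a sharp discontinuity at 1024. The smoother formula removes that jump:
	fmt.Println(smootherNewcap(1023, 1025)) // 1470
	fmt.Println(smootherNewcap(1024, 1025)) // 1472
}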

After that, growslice computes capmem, the amount of memory the new slice's array requires, and the corresponding element capacity newcap. (This computation is specialized for common element sizes: size 1, pointer size, powers of two, and the general case.)
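
Because capmem is rounded up to a malloc size class by roundupsize, the resulting capacity can end up larger than plain doubling would suggest. A small example (the printed value depends on the runtime's size classes; 16 is what a typical 64-bit build produces):

package main

import "fmt"

func main() {
	s := make([]byte, 5)
	s = append(s, 1)    // need cap >= 6; doubled cap is 10 bytes, then rounded up to a size class
	fmt.Println(cap(s)) // typically 16, the first size class above 10 bytes
}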

Finally, mallocgc allocates a memory block of capmem bytes, and memmove copies the old slice's data into the new slice's memory (pointer p).
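
One practical consequence of this allocate-and-copy step is that, after growth, the new slice no longer shares a backing array with the old one:

package main

import "fmt"

func main() {
	a := make([]int, 2, 2)
	b := append(a, 3) // cap exceeded: growslice allocates a new array and copies a's elements

	b[0] = 42
	fmt.Println(a[0], b[0])     // 0 42: a still points at the old array
	fmt.Println(&a[0] == &b[0]) // false: different backing arrays after growth
}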