
Java Multithreading 30 - A Deep Dive into ConcurrentHashMap (JDK 1.7), Part 1

Summary: ConcurrentHashMap is a thread-safe hash table whose usage is similar to that of HashMap and Hashtable.

1. Introduction to ConcurrentHashMap

Note: the ConcurrentHashMap discussed in this article is based on JDK 1.7.0_07.

ConcurrentHashMap is a thread-safe hash table whose usage is similar to that of HashMap and Hashtable. HashMap, Hashtable, and ConcurrentHashMap compare as follows:

  • HashMap is a non-thread-safe hash table, typically used in single-threaded programs.
  • Hashtable is a thread-safe hash table that achieves thread safety with synchronized: all threads contend for the monitor lock of the same object. Under heavy contention Hashtable performs poorly, because while one thread is inside a synchronized method, other threads calling any synchronized method on the same object may block.
  • ConcurrentHashMap is a thread-safe hash table that achieves thread safety through lock striping. It divides the hash table into segments (the Segment class); each segment holds part of the table and is itself a reentrant mutex (implemented via ReentrantLock). Accesses to the same segment are mutually exclusive, but accesses to different segments can proceed concurrently.

For more on HashMap, Hashtable, and ReentrantLock, see the following articles:

  1. Java Collections 10 - HashMap in Detail (JDK 1.6)
  2. Java Collections 12 - Hashtable in Detail
  3. Java Multithreading 16 - The ReentrantLock Mutex

Let's first look at ConcurrentHashMap's data structure:

  1. ConcurrentHashMap extends the AbstractMap abstract class.
  2. Segment is an inner class of ConcurrentHashMap; it is the storage structure behind ConcurrentHashMap's lock striping. ConcurrentHashMap and Segment form a composition: one ConcurrentHashMap object contains several Segment objects, reflected in the code as a Segment array field in the ConcurrentHashMap class.
  3. Segment extends ReentrantLock, so a Segment is essentially a reentrant mutex.
  4. HashEntry is also an inner class of ConcurrentHashMap; it is a singly linked list node that stores a key-value pair. Segment and HashEntry form a composition: each Segment holds a HashEntry array, and each element of that array is the head node of a singly linked list.

When multiple threads contend for one hash table object, Hashtable controls concurrency with a single lock, whereas ConcurrentHashMap splits the table into many segments and guards each segment with its own mutex. ConcurrentHashMap's concurrency control is therefore finer-grained, making it better suited to highly concurrent scenarios.

The methods of ConcurrentHashMap are listed below:

    // Creates a new, empty map with the default initial capacity (16), load factor (0.75), and concurrencyLevel (16)
    ConcurrentHashMap()
    // Creates a new, empty map with the specified initial capacity, default load factor (0.75), and concurrencyLevel (16)
    ConcurrentHashMap(int initialCapacity)
    // Creates a new, empty map with the specified initial capacity and load factor and the default concurrencyLevel (16)
    ConcurrentHashMap(int initialCapacity, float loadFactor)
    // Creates a new map with the same mappings as the given map
    ConcurrentHashMap(Map<? extends K, ? extends V> m)
    // Creates a new, empty map with the specified initial capacity, load factor, and concurrency level
    ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel)
    // Removes all of the mappings from this map
    void clear()
    // Legacy method testing if some key maps to the specified value in this table
    boolean contains(Object value)
    // Tests if the specified object is a key in this table
    boolean containsKey(Object key)
    // Returns true if this map maps one or more keys to the specified value
    boolean containsValue(Object value)
    // Returns an enumeration of the values in this table
    Enumeration<V> elements()
    // Returns a Set view of the mappings contained in this map
    Set<Map.Entry<K, V>> entrySet()
    // Returns the value to which the specified key is mapped, or null if this map contains no mapping for the key
    V get(Object key)
    // Returns true if this map contains no key-value mappings
    boolean isEmpty()
    // Returns an enumeration of the keys in this table
    Enumeration<K> keys()
    // Returns a Set view of the keys contained in this map
    Set<K> keySet()
    // Maps the specified key to the specified value in this table
    V put(K key, V value)
    // Copies all of the mappings from the specified map to this one
    void putAll(Map<? extends K, ? extends V> m)
    // If the specified key is not already associated with a value, associates it with the given value
    V putIfAbsent(K key, V value)
    // Removes the key (and its corresponding value) from this map
    V remove(Object key)
    // Removes the entry for a key only if it is currently mapped to the given value
    boolean remove(Object key, Object value)
    // Replaces the entry for a key only if it is currently mapped to some value
    V replace(K key, V value)
    // Replaces the entry for a key only if it is currently mapped to the given value
    boolean replace(K key, V oldValue, V newValue)
    // Returns the number of key-value mappings in this map
    int size()
    // Returns a Collection view of the values contained in this map
    Collection<V> values()

2. A ConcurrentHashMap Example

Let's first get a feel for how ConcurrentHashMap is used through an example:

    package com.coderap.collection;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConcurrentHashMapTest {

        // Number of loop iterations
        private final static int loopCount = 5;
        // The HashMap and ConcurrentHashMap to operate on
        private final static Map<String, String> hashMap = new HashMap<>();
        private final static ConcurrentHashMap<String, String> concurrentHashMap = new ConcurrentHashMap<>();

        public static void main(String[] args) {
            // Start two threads operating on the same map
            new Thread(new ConcurrentHashMapTest.OperateThread(concurrentHashMap, loopCount), "T-1").start();
            new Thread(new ConcurrentHashMapTest.OperateThread(concurrentHashMap, loopCount), "T-2").start();
            // new Thread(new ConcurrentHashMapTest.OperateThread(hashMap, loopCount), "T-3").start();
            // new Thread(new ConcurrentHashMapTest.OperateThread(hashMap, loopCount), "T-4").start();
        }

        // Iterates over the map and prints its entries
        private static void iterate(Map<String, String> map) {
            StringBuilder stringBuilder = new StringBuilder();
            for (Map.Entry<String, String> entry : map.entrySet()) {
                stringBuilder.append(entry.getKey() + " => " + entry.getValue());
                stringBuilder.append(", ");
            }
            // Remove the trailing ", "
            if (stringBuilder.length() >= 2) {
                stringBuilder.delete(stringBuilder.length() - 2, stringBuilder.length());
            }
            System.out.println(Thread.currentThread().getName() + " iterate: " + stringBuilder);
        }

        private static class OperateThread implements Runnable {
            private Map<String, String> map;
            private int loopCount;

            public OperateThread(Map<String, String> map, int loopCount) {
                this.map = map;
                this.loopCount = loopCount;
            }

            @Override
            public void run() {
                // Repeatedly add an entry, then iterate and print
                while (loopCount >= 0) {
                    map.put(Thread.currentThread().getName() + " - " + loopCount, "" + loopCount);
                    iterate(map);
                    loopCount--;
                }
            }
        }
    }

The example is straightforward and quite similar to the earlier CopyOnWriteArrayList example: two threads operate on the same map concurrently. One run produced the following output:

    T-1 iterate: T-1 - 5 => 5
    T-1 iterate: T-1 - 5 => 5, T-1 - 4 => 4
    T-1 iterate: T-1 - 5 => 5, T-1 - 4 => 4, T-1 - 3 => 3
    T-1 iterate: T-1 - 2 => 2, T-1 - 5 => 5, T-1 - 4 => 4, T-1 - 3 => 3
    T-1 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 5 => 5, T-1 - 4 => 4, T-1 - 3 => 3
    T-1 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 0 => 0, T-1 - 5 => 5, T-1 - 4 => 4, T-1 - 3 => 3
    T-2 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 0 => 0, T-1 - 5 => 5, T-1 - 4 => 4, T-2 - 5 => 5, T-1 - 3 => 3
    T-2 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 0 => 0, T-1 - 5 => 5, T-1 - 4 => 4, T-2 - 5 => 5, T-1 - 3 => 3, T-2 - 4 => 4
    T-2 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 0 => 0, T-2 - 3 => 3, T-1 - 5 => 5, T-1 - 4 => 4, T-2 - 5 => 5, T-1 - 3 => 3, T-2 - 4 => 4
    T-2 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 0 => 0, T-2 - 3 => 3, T-1 - 5 => 5, T-2 - 2 => 2, T-1 - 4 => 4, T-2 - 5 => 5, T-1 - 3 => 3, T-2 - 4 => 4
    T-2 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 0 => 0, T-2 - 1 => 1, T-2 - 3 => 3, T-1 - 5 => 5, T-2 - 2 => 2, T-1 - 4 => 4, T-2 - 5 => 5, T-1 - 3 => 3, T-2 - 4 => 4
    T-2 iterate: T-1 - 2 => 2, T-1 - 1 => 1, T-1 - 0 => 0, T-1 - 5 => 5, T-1 - 4 => 4, T-1 - 3 => 3, T-2 - 1 => 1, T-2 - 0 => 0, T-2 - 3 => 3, T-2 - 2 => 2, T-2 - 5 => 5, T-2 - 4 => 4

Likewise, if the ConcurrentHashMap in the source above is replaced with a HashMap, the program may throw a ConcurrentModificationException.
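The fail-fast behavior can be reproduced even in a single thread: structurally modifying a HashMap while iterating over it throws ConcurrentModificationException, whereas ConcurrentHashMap's weakly consistent iterators tolerate concurrent modification. A minimal sketch:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put("a", "1");
        hashMap.put("b", "2");
        try {
            for (String key : hashMap.keySet()) {
                hashMap.put("c", "3"); // structural modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("HashMap iterator is fail-fast: " + e);
        }

        Map<String, String> chm = new ConcurrentHashMap<>();
        chm.put("a", "1");
        chm.put("b", "2");
        for (String key : chm.keySet()) {
            chm.put("c", "3"); // weakly consistent iterator: no exception
        }
        System.out.println("ConcurrentHashMap tolerated the modification");
    }
}
```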

3. The Basic Design of ConcurrentHashMap

As described in the introduction, ConcurrentHashMap defines two inner classes, Segment and HashEntry; ConcurrentHashMap holds a field of type Segment array, and each Segment holds a field of type HashEntry array. Let's first look at the definitions of Segment and HashEntry, then combine them with the rest of ConcurrentHashMap to infer its overall data structure.

3.1. The Design of the HashEntry Class

The source of the HashEntry class is quite simple:

    static final class HashEntry<K, V> {
        // The hash value
        final int hash;
        // The key
        final K key;
        // The value; declared volatile for cross-thread visibility
        volatile V value;
        // The next HashEntry, forming a linked list
        volatile HashEntry<K, V> next;

        // Constructor
        HashEntry(int hash, K key, V value, HashEntry<K, V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        // Sets the successor HashEntry via Unsafe
        final void setNext(HashEntry<K, V> n) {
            UNSAFE.putOrderedObject(this, nextOffset, n);
        }

        // Unsafe machinery
        static final sun.misc.Unsafe UNSAFE;
        static final long nextOffset;
        static {
            try {
                // Initialize the UNSAFE constant
                UNSAFE = sun.misc.Unsafe.getUnsafe();
                Class k = HashEntry.class;
                // Initialize nextOffset for use in Unsafe operations
                nextOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("next"));
            } catch (Exception e) {
                throw new Error(e);
            }
        }
    }

HashEntry plays the role of a singly linked list node. It is generic, with its type parameters coming from the instantiation of the enclosing ConcurrentHashMap, and it stores the hash value, the key-value pair, and a reference to its successor. The next field points to the successor node and is assigned through an Unsafe call inside the setNext(HashEntry) method. Note that both value and next are declared volatile, which guarantees visibility across threads.

Inside the Segment class, the HashEntry array is declared as the field table, modified with both transient and volatile:

    /**
     * The per-segment table. Elements are accessed via
     * entryAt/setEntryAt providing volatile semantics.
     * The per-segment array of HashEntry chains, declared volatile;
     * elements are mainly read and written through the volatile-semantics
     * entryAt/setEntryAt methods
     */
    transient volatile HashEntry<K, V>[] table;

ConcurrentHashMap provides two methods, entryAt(HashEntry[], int) and setEntryAt(HashEntry[], int, HashEntry), for accessing this table:

    /**
     * Gets the ith element of given table (if nonnull) with volatile
     * read semantics. Note: This is manually integrated into a few
     * performance-sensitive methods to reduce call overhead.
     * Gets the element at index i of tab; don't worry if this looks
     * cryptic, it is explained below
     */
    @SuppressWarnings("unchecked")
    static final <K, V> HashEntry<K, V> entryAt(HashEntry<K, V>[] tab, int i) {
        return (tab == null) ? null : (HashEntry<K, V>) UNSAFE.getObjectVolatile(tab, ((long) i << TSHIFT) + TBASE);
    }

    /**
     * Sets the ith element of given table, with volatile write
     * semantics. (See above about use of putOrderedObject.)
     * Sets the element at index i of tab to e; don't worry if this looks
     * cryptic, it is explained below
     */
    static final <K, V> void setEntryAt(HashEntry<K, V>[] tab, int i, HashEntry<K, V> e) {
        UNSAFE.putOrderedObject(tab, ((long) i << TSHIFT) + TBASE, e);
    }

3.2. The Design of the Segment Class

The Segment class is considerably more involved; in fact, the methods defined on Segment carry most of the work of adding, removing, updating, and looking up elements in ConcurrentHashMap. Here we first look at Segment's basic design, with method bodies omitted:

    static final class Segment<K, V> extends ReentrantLock implements Serializable {
        private static final long serialVersionUID = 2249069246763182397L;
        /**
         * The maximum number of times to tryLock in a prescan before
         * possibly blocking on acquire in preparation for a locked
         * segment operation. On multiprocessors, using a bounded
         * number of retries maintains cache acquired while locating
         * nodes.
         * Maximum number of tryLock attempts while scanning:
         * 64 on multiprocessor machines, 1 on single-core machines
         */
        static final int MAX_SCAN_RETRIES = Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
        /**
         * The per-segment table. Elements are accessed via
         * entryAt/setEntryAt providing volatile semantics.
         * The per-segment array of HashEntry chains, declared volatile;
         * elements are read and written through the volatile-semantics
         * entryAt/setEntryAt methods
         */
        transient volatile HashEntry<K, V>[] table;
        /**
         * The number of elements. Accessed only either within locks
         * or among other volatile reads that maintain visibility.
         * Number of elements in this segment
         */
        transient int count;
        /**
         * The total number of mutative operations in this segment.
         * Even though this may overflows 32 bits, it provides
         * sufficient accuracy for stability checks in CHM isEmpty()
         * and size() methods. Accessed only either within locks or
         * among other volatile reads that maintain visibility.
         * Number of modifications made to this segment
         */
        transient int modCount;
        /**
         * The table is rehashed when its size exceeds this threshold.
         * (The value of this field is always <tt>(int)(capacity *
         * loadFactor)</tt>.)
         * When the element count exceeds this threshold, a rehash is triggered:
         * threshold = (int)(capacity * loadFactor)
         * i.e. threshold = capacity * load factor
         */
        transient int threshold;
        /**
         * The load factor for the hash table. Even though this value
         * is same for all segments, it is replicated to avoid needing
         * links to outer object.
         * Load factor, identical for all segments
         * @serial
         */
        final float loadFactor;

        // Constructs a Segment from a HashEntry array
        Segment(float lf, int threshold, HashEntry<K, V>[] tab) {
            this.loadFactor = lf;
            this.threshold = threshold;
            this.table = tab;
        }

        // Adds an element
        final V put(K key, int hash, V value, boolean onlyIfAbsent) { ... }

        /**
         * Doubles size of table and repacks entries, also adding the
         * given node to new table
         */
        @SuppressWarnings("unchecked")
        private void rehash(HashEntry<K, V> node) { ... }

        /**
         * Scans for a node containing given key while trying to
         * acquire lock, creating and returning one if not found. Upon
         * return, guarantees that lock is held. UNlike in most
         * methods, calls to method equals are not screened: Since
         * traversal speed doesn't matter, we might as well help warm
         * up the associated code and accesses as well.
         *
         * @return a new node if key not found, else null
         */
        private HashEntry<K, V> scanAndLockForPut(K key, int hash, V value) { ... }

        /**
         * Scans for a node containing the given key while trying to
         * acquire lock for a remove or replace operation. Upon
         * return, guarantees that lock is held. Note that we must
         * lock even if the key is not found, to ensure sequential
         * consistency of updates.
         * Similar to {@link #scanAndLockForPut}, but only scans and locks
         */
        private void scanAndLock(Object key, int hash) { ... }

        /**
         * Remove; match on key only if value null, else match both.
         * Removal; returns the removed value
         */
        final V remove(Object key, int hash, Object value) { ... }

        // Replaces a node's value; the boolean result indicates whether an update occurred
        final boolean replace(K key, int hash, V oldValue, V newValue) { ... }

        /**
         * Replaces a node's value with value and returns the old value.
         * Unlike the replace above, this method does not compare against an
         * expected old value, and its return type differs; the rest of the flow is the same.
         */
        final V replace(K key, int hash, V value) { ... }

        // Clears the segment, setting every slot of table to null
        final void clear() { ... }
    }

Segment extends ReentrantLock, so each Segment is itself a mutex; this is exactly how ConcurrentHashMap implements mutually exclusive access per segment. Segment's fields are fairly simple; the real complexity lies in its methods, whose bodies are omitted above and will be covered later.
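As a taste of what those methods do with the inherited lock, here is a simplified, hypothetical sketch (not the JDK code) of the tryLock-then-spin pattern that scanAndLockForPut and scanAndLock follow: spin with tryLock() a bounded number of times, then fall back to a blocking lock():

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: mimics how Segment bounds its tryLock() spinning
public class SpinLockDemo extends ReentrantLock {
    static final int MAX_SCAN_RETRIES =
            Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;

    // Spin with tryLock() up to MAX_SCAN_RETRIES times, then block on lock()
    void lockWithSpin() {
        if (!tryLock()) {
            int retries = 0;
            while (!tryLock()) {
                if (++retries > MAX_SCAN_RETRIES) {
                    lock(); // fall back to blocking acquisition
                    break;
                }
            }
        }
    }

    public static void main(String[] args) {
        SpinLockDemo lock = new SpinLockDemo();
        lock.lockWithSpin();
        try {
            System.out.println("held: " + lock.isHeldByCurrentThread()); // prints "held: true"
        } finally {
            lock.unlock();
        }
    }
}
```

The real scanAndLockForPut additionally traverses the bucket's chain during the spin to warm up caches and pre-create the node to insert.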

3.3. The Design of the ConcurrentHashMap Class

Now let's turn to the ConcurrentHashMap class itself. Its overall design is quite complex, so we focus on the basics here; the individual operations are covered later.

3.3.1. The Static Initializer

First, the static initializer:

    // Unsafe mechanics
    private static final sun.misc.Unsafe UNSAFE;
    // arrayBaseOffset is a native method returning the offset of the first element of an array;
    // SBASE is the offset of the first Segment in a Segment array
    private static final long SBASE;
    // arrayIndexScale is a native method returning the address increment between array elements;
    // SSHIFT is derived from the element scale of a Segment array
    private static final int SSHIFT;
    // TBASE is the offset of the first HashEntry in a HashEntry array
    private static final long TBASE;
    // TSHIFT is derived from the element scale of a HashEntry array
    private static final int TSHIFT;
    // Offset of the hashSeed field
    private static final long HASHSEED_OFFSET;

    static {
        int ss, ts;
        try {
            UNSAFE = sun.misc.Unsafe.getUnsafe();
            Class tc = HashEntry[].class;
            Class sc = Segment[].class;
            TBASE = UNSAFE.arrayBaseOffset(tc);
            SBASE = UNSAFE.arrayBaseOffset(sc);
            ts = UNSAFE.arrayIndexScale(tc);
            ss = UNSAFE.arrayIndexScale(sc);
            HASHSEED_OFFSET = UNSAFE.objectFieldOffset(ConcurrentHashMap.class.getDeclaredField("hashSeed"));
        } catch (Exception e) {
            throw new Error(e);
        }
        // ss and ts must be powers of two
        if ((ss & (ss - 1)) != 0 || (ts & (ts - 1)) != 0)
            throw new Error("data type scale not a power of two");
        // The position of the single set bit in the Segment array's element scale
        SSHIFT = 31 - Integer.numberOfLeadingZeros(ss);
        // The position of the single set bit in the HashEntry array's element scale
        TSHIFT = 31 - Integer.numberOfLeadingZeros(ts);
    }

ConcurrentHashMap makes heavy use of Unsafe methods to operate on memory directly. In the static initializer it computes several important constants: SBASE, TBASE, SSHIFT, and TSHIFT. SBASE and TBASE are obtained via the static Unsafe method arrayBaseOffset(Class) and represent the offset of the first element of a Segment array and of a HashEntry array, respectively.

Computing SSHIFT and TSHIFT is a bit more subtle. Take TSHIFT as an example: Unsafe's arrayIndexScale(Class) returns the address increment ts between elements of a HashEntry array. Integer.numberOfLeadingZeros(ts) gives the number of leading zero bits in the binary representation of ts, so 31 - Integer.numberOfLeadingZeros(ts) is the position of its highest set bit. Note the if check in the code:

    if ((ss & (ss - 1)) != 0 || (ts & (ts - 1)) != 0)
        throw new Error("data type scale not a power of two");

(ts & (ts - 1)) != 0 tests whether ts is a power of two; if it is not, an Error is thrown immediately, so both ss and ts are guaranteed to be powers of two. A power of two has exactly one set bit in its binary representation, so SSHIFT and TSHIFT are simply the positions of the single set bit in ss and ts.

Note: given TSHIFT, computing 1 << TSHIFT recovers ts; the same holds for SSHIFT and ss.
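Both bit tricks are easy to verify directly; ts = 8 below is just a hypothetical scale value:

```java
public class PowerOfTwoDemo {
    public static void main(String[] args) {
        // (x & (x - 1)) == 0 holds exactly for powers of two (and 0)
        System.out.println((16 & 15) == 0); // true:  16 = 0b10000
        System.out.println((12 & 11) == 0); // false: 12 = 0b1100

        // For a power of two ts, 31 - numberOfLeadingZeros(ts) is the index
        // of its single set bit, and 1 << shift recovers ts
        int ts = 8; // hypothetical array index scale
        int shift = 31 - Integer.numberOfLeadingZeros(ts);
        System.out.println(shift);      // 3
        System.out.println(1 << shift); // 8
    }
}
```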

Why design it this way? Consider the two methods shown earlier that use the constants TBASE and TSHIFT:

    // Gets the element at index i of tab
    @SuppressWarnings("unchecked")
    static final <K, V> HashEntry<K, V> entryAt(HashEntry<K, V>[] tab, int i) {
        return (tab == null) ? null : (HashEntry<K, V>) UNSAFE.getObjectVolatile(tab, ((long) i << TSHIFT) + TBASE);
    }

    // Sets the element at index i of tab to e
    static final <K, V> void setEntryAt(HashEntry<K, V>[] tab, int i, HashEntry<K, V> e) {
        UNSAFE.putOrderedObject(tab, ((long) i << TSHIFT) + TBASE, e);
    }

When reading or writing an element of the HashEntry<K, V>[] array tab, the element is located with ((long) i << TSHIFT) + TBASE. TBASE is the offset of the first element of a HashEntry array, here the first element of tab. Since arrays are laid out linearly in memory and each HashEntry element occupies ts bytes, with ts expressible as 1 << TSHIFT, the offset of element i relative to TBASE is i << TSHIFT.
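The address arithmetic can be illustrated with hypothetical values (TBASE = 16 and ts = 4 are typical for object arrays on HotSpot with compressed oops, but the real values come from Unsafe at runtime):

```java
public class OffsetDemo {
    public static void main(String[] args) {
        // Hypothetical values standing in for Unsafe.arrayBaseOffset / arrayIndexScale
        long TBASE = 16;
        int ts = 4;
        int TSHIFT = 31 - Integer.numberOfLeadingZeros(ts); // 2

        // Element i lives at TBASE + i * ts; the multiplication becomes
        // a shift because ts is a power of two
        for (int i = 0; i < 4; i++) {
            long offset = ((long) i << TSHIFT) + TBASE;
            System.out.println("element " + i + " offset = " + offset);
        }
        // element 0 offset = 16, element 1 offset = 20, element 2 offset = 24, ...
    }
}
```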

SBASE and SSHIFT are used the same way, for example in segmentAt(Segment[], int), which fetches the element at a given index of a Segment array:

    @SuppressWarnings("unchecked")
    static final <K, V> Segment<K, V> segmentAt(Segment<K, V>[] ss, int j) {
        long u = (j << SSHIFT) + SBASE;
        return ss == null ? null : (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u);
    }

By combining the base offset with a shift, an array element's location in memory can be computed very quickly.

3.3.2. Fields and Constants

The fields and constants used by ConcurrentHashMap are as follows:

    // Default initial capacity
    static final int DEFAULT_INITIAL_CAPACITY = 16;
    // Default load factor
    static final float DEFAULT_LOAD_FACTOR = 0.75f;
    // Default concurrency level
    static final int DEFAULT_CONCURRENCY_LEVEL = 16;
    // Maximum capacity: 1,073,741,824
    static final int MAXIMUM_CAPACITY = 1 << 30;
    // Minimum capacity of a per-segment table
    static final int MIN_SEGMENT_TABLE_CAPACITY = 2;
    // Maximum number of segments: 65,536
    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
    /**
     * Number of unsynchronized retries in size and containsValue
     * methods before resorting to locking. This is used to avoid
     * unbounded retries if tables undergo continuous modification
     * which would make it impossible to obtain an accurate result.
     * Number of retries in size() and containsValue() before locking,
     * avoiding unbounded retries while the table is being modified by other threads
     */
    static final int RETRIES_BEFORE_LOCK = 2;
    /**
     * Mask value for indexing into segments. The upper bits of a
     * key's hash code are used to choose the segment.
     * Mask used to locate a Segment
     */
    final int segmentMask;
    /**
     * Shift value for indexing within segments.
     */
    final int segmentShift;
    /**
     * The segments, each of which is a specialized hash table.
     * Holds all the Segments
     */
    final Segment<K, V>[] segments;
    // View-related fields
    transient Set<K> keySet;
    transient Set<Map.Entry<K, V>> entrySet;
    transient Collection<V> values;
    /**
     * A randomizing value associated with this instance that is applied to
     * hash code of keys to make hash collisions harder to find.
     * Random hash seed
     */
    private transient final int hashSeed = randomHashSeed(this);

    // Produces the random hash seed
    private static int randomHashSeed(ConcurrentHashMap instance) {
        if (sun.misc.VM.isBooted() && Holder.ALTERNATIVE_HASHING) {
            return sun.misc.Hashing.randomHashSeed(instance);
        }
        return 0;
    }

Only the definitions and comments are given above; you need not worry yet about how each is used, as they will all come up later.

From this basic design we can roughly infer ConcurrentHashMap's internal structure:

  1. Each ConcurrentHashMap instance has an array field segments holding Segment instances.
  2. Each Segment instance has an array field table holding HashEntry instances; HashEntry instances form singly linked lists, and each HashEntry in table is the head node of one such list.

A rough diagram of the structure:

[Figure 1: ConcurrentHashMap structure diagram]
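To make the two-level structure concrete, here is a heavily simplified toy sketch (not the JDK implementation; the real Segment stores HashEntry chains and accesses them via Unsafe): a segment is chosen from the key's hash, and only that segment's lock is taken, so writes to different segments do not block each other.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Toy segmented map: each segment is its own lock guarding its own table.
public class ToySegmentedMap<K, V> {
    static class Segment<K, V> extends ReentrantLock {
        final Map<K, V> table = new HashMap<>();
    }

    private final Segment<K, V>[] segments;

    @SuppressWarnings("unchecked")
    public ToySegmentedMap(int concurrencyLevel) {
        segments = new Segment[concurrencyLevel];
        for (int i = 0; i < concurrencyLevel; i++) {
            segments[i] = new Segment<>();
        }
    }

    // Pick a segment from the key's hash (the JDK uses the upper hash bits instead)
    private Segment<K, V> segmentFor(Object key) {
        return segments[(key.hashCode() & 0x7fffffff) % segments.length];
    }

    public V put(K key, V value) {
        Segment<K, V> s = segmentFor(key);
        s.lock(); // threads writing to other segments are not blocked
        try {
            return s.table.put(key, value);
        } finally {
            s.unlock();
        }
    }

    public V get(Object key) {
        Segment<K, V> s = segmentFor(key);
        s.lock(); // the real implementation reads without locking via volatile semantics
        try {
            return s.table.get(key);
        } finally {
            s.unlock();
        }
    }

    public static void main(String[] args) {
        ToySegmentedMap<String, Integer> map = new ToySegmentedMap<>(16);
        map.put("a", 1);
        System.out.println(map.get("a")); // 1
    }
}
```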

3.4. Fully Annotated ConcurrentHashMap Source

Below is a fully annotated version of the ConcurrentHashMap source; a detailed walkthrough follows in the next article.

  • package java.util.concurrent;
  • import java.util.concurrent.locks.*;
  • import java.util.*;
  • import java.io.Serializable;
  • import java.io.IOException;
  • import java.io.ObjectInputStream;
  • import java.io.ObjectOutputStream;
  • public class ConcurrentHashMap<K, V> extends AbstractMap<K, V>
  • implements ConcurrentMap<K, V>, Serializable {
  • private static final long serialVersionUID = 7249069246763182397L;
  • /*
  • * The basic strategy is to subdivide the table among Segments,
  • * each of which itself is a concurrently readable hash table. To
  • * reduce footprint, all but one segments are constructed only
  • * when first needed (see ensureSegment). To maintain visibility
  • * in the presence of lazy construction, accesses to segments as
  • * well as elements of segment's table must use volatile access,
  • * which is done via Unsafe within methods segmentAt etc
  • * below. These provide the functionality of AtomicReferenceArrays
  • * but reduce the levels of indirection. Additionally,
  • * volatile-writes of table elements and entry "next" fields
  • * within locked operations use the cheaper "lazySet" forms of
  • * writes (via putOrderedObject) because these writes are always
  • * followed by lock releases that maintain sequential consistency
  • * of table updates.
  • *
  • * Historical note: The previous version of this class relied
  • * heavily on "final" fields, which avoided some volatile reads at
  • * the expense of a large initial footprint. Some remnants of
  • * that design (including forced construction of segment 0) exist
  • * to ensure serialization compatibility.
  • */
  • /* ---------------- Constants -------------- */
  • // 默认初始容量
  • static final int DEFAULT_INITIAL_CAPACITY = 16;
  • // 加载因子
  • static final float DEFAULT_LOAD_FACTOR = 0.75f;
  • // 默认并发级别
  • static final int DEFAULT_CONCURRENCY_LEVEL = 16;
  • // 最大容量,1,073,741,824
  • static final int MAXIMUM_CAPACITY = 1 << 30;
  • // 分段表最小容量
  • static final int MIN_SEGMENT_TABLE_CAPACITY = 2;
  • // 分段表最大数量,也是最大并发级别,65,535
  • static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
  • /**
  • * Number of unsynchronized retries in size and containsValue
  • * methods before resorting to locking. This is used to avoid
  • * unbounded retries if tables undergo continuous modification
  • * which would make it impossible to obtain an accurate result.
  • * 在size()方法和containsValue()方法中上锁钱的重试次数,
  • * 可避免因哈希表被其他线程修改导致的无限重试
  • */
  • static final int RETRIES_BEFORE_LOCK = 2;
  • /* ---------------- Fields -------------- */
  • /**
  • * holds values which can't be initialized until after VM is booted.
  • * 存放一些在虚拟机启动后才能初始化的值
  • */
  • private static class Holder {
  • /**
  • * Enable alternative hashing of String keys?
  • *
  • * <p>Unlike the other hash map implementations we do not implement a
  • * threshold for regulating whether alternative hashing is used for
  • * String keys. Alternative hashing is either enabled for all instances
  • * or disabled for all instances.
  • */
  • static final boolean ALTERNATIVE_HASHING;
  • static {
  • // Use the "threshold" system property even though our threshold behaviour is "ON" or "OFF".
  • // 获得JVM的参数jdk.map.althashing.threshold的值,这个值将作为阈值
  • String altThreshold = java.security.AccessController.doPrivileged(new sun.security.action.GetPropertyAction("jdk.map.althashing.threshold"));
  • int threshold;
  • try {
  • // 强转altThreshold,如果altThreshold为null则取2,147,483,647
  • threshold = (null != altThreshold) ? Integer.parseInt(altThreshold) : Integer.MAX_VALUE;
  • // disable alternative hashing if -1
  • // 当threshold为-1时,关掉alternative hashing
  • if (threshold == -1) {
  • threshold = Integer.MAX_VALUE;
  • }
  • if (threshold < 0) {
  • throw new IllegalArgumentException("value must be positive integer.");
  • }
  • } catch (IllegalArgumentException failed) {
  • throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
  • }
  • /**
  • * 当threshold <= 1,073,741,824时ALTERNATIVE_HASHING为true
  • * 其实这种情况表明,当-Djdk.map.althashing.threshold参数传入-1时,ALTERNATIVE_HASHING必然为false,也即关掉alternative hashing
  • */
  • ALTERNATIVE_HASHING = threshold <= MAXIMUM_CAPACITY;
  • }
  • }
  • /**
  • * A randomizing value associated with this instance that is applied to
  • * hash code of keys to make hash collisions harder to find.
  • * 随机Hash种子,这个值取决于 {@link Holder#ALTERNATIVE_HASHING}
  • */
  • private transient final int hashSeed = randomHashSeed(this);
  • // 根据Holder.ALTERNATIVE_HASHING计算得到hashSeed
  • private static int randomHashSeed(ConcurrentHashMap instance) {
  • if (sun.misc.VM.isBooted() && Holder.ALTERNATIVE_HASHING) {
  • return sun.misc.Hashing.randomHashSeed(instance);
  • }
  • return 0;
  • }
  • /**
  • * Mask value for indexing into segments. The upper bits of a
  • * key's hash code are used to choose the segment.
  • * 用于定位Segment的Mask码
  • */
  • final int segmentMask;
  • /**
  • * Shift value for indexing within segments.
  • */
  • final int segmentShift;
  • /**
  • * The segments, each of which is a specialized hash table.
  • * 存放所有的Segment
  • */
  • final Segment<K, V>[] segments;
  • // 遍历相关
  • transient Set<K> keySet;
  • transient Set<Map.Entry<K, V>> entrySet;
  • transient Collection<V> values;
  • /**
  • * ConcurrentHashMap list entry. Note that this is never exported
  • * out as a user-visible Map.Entry.
  • */
  • static final class HashEntry<K, V> {
  • // hash值
  • final int hash;
  • // 键
  • final K key;
  • // 值,volatile修饰,实现线程可见性
  • volatile V value;
  • // 下一个HashEntry,构成链表
  • volatile HashEntry<K, V> next;
  • // 构造方法
  • HashEntry(int hash, K key, V value, HashEntry<K, V> next) {
  • this.hash = hash;
  • this.key = key;
  • this.value = value;
  • this.next = next;
  • }
  • // Unsafe方式设置后继HashEntry
  • final void setNext(HashEntry<K, V> n) {
  • UNSAFE.putOrderedObject(this, nextOffset, n);
  • }
  • // Unsafe相关
  • static final sun.misc.Unsafe UNSAFE;
  • static final long nextOffset;
  • static {
  • try {
  • // 初始化UNSAFE常量
  • UNSAFE = sun.misc.Unsafe.getUnsafe();
  • Class k = HashEntry.class;
  • // 初始化nextOffset,用于Unsafe操作
  • nextOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("next"));
  • } catch (Exception e) {
  • throw new Error(e);
  • }
  • }
  • }
  • /**
  • * Gets the ith element of given table (if nonnull) with volatile
  • * read semantics. Note: This is manually integrated into a few
  • * performance-sensitive methods to reduce call overhead.
  • * 获取tab数组上i位置的元素
  • */
  • @SuppressWarnings("unchecked")
  • static final <K, V> HashEntry<K, V> entryAt(HashEntry<K, V>[] tab, int i) {
  • return (tab == null) ? null : (HashEntry<K, V>) UNSAFE.getObjectVolatile(tab, ((long) i << TSHIFT) + TBASE);
  • }
  • /**
  • * Sets the ith element of given table, with volatile write
  • * semantics. (See above about use of putOrderedObject.)
  • * 设置tab数组上i位置的元素为e
  • */
  • static final <K, V> void setEntryAt(HashEntry<K, V>[] tab, int i, HashEntry<K, V> e) {
  • UNSAFE.putOrderedObject(tab, ((long) i << TSHIFT) + TBASE, e);
  • }
  • /**
  • * Applies a supplemental hash function to a given hashCode, which
  • * defends against poor quality hash functions. This is critical
  • * because ConcurrentHashMap uses power-of-two length hash tables,
  • * that otherwise encounter collisions for hashCodes that do not
  • * differ in lower or upper bits.
  • * hash方法,根据k计算hash值
  • */
  • private int hash(Object k) {
  • int h = hashSeed;
  • if ((0 != h) && (k instanceof String)) {
  • return sun.misc.Hashing.stringHash32((String) k);
  • }
  • h ^= k.hashCode();
  • // Spread bits to regularize both segment and index locations,
  • // using variant of single-word Wang/Jenkins hash.
  • h += (h << 15) ^ 0xffffcd7d;
  • h ^= (h >>> 10);
  • h += (h << 3);
  • h ^= (h >>> 6);
  • h += (h << 2) + (h << 14);
  • return h ^ (h >>> 16);
  • }
  • /**
  • * Segments are specialized versions of hash tables. This
  • * subclasses from ReentrantLock opportunistically, just to
  • * simplify some locking and avoid separate construction.
  • * 继承自ReentrantLock,即自身是一个重入锁
  • */
  • static final class Segment<K, V> extends ReentrantLock implements Serializable {
  • private static final long serialVersionUID = 2249069246763182397L;
  • /**
  • * The maximum number of times to tryLock in a prescan before
  • * possibly blocking on acquire in preparation for a locked
  • * segment operation. On multiprocessors, using a bounded
  • * number of retries maintains cache acquired while locating
  • * nodes.
  • * 尝试获取锁的最大扫描次数,当在多核计算机上为64次,单核计算机上为1次
  • */
  • static final int MAX_SCAN_RETRIES = Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
  • /**
  • * The per-segment table. Elements are accessed via
  • * entryAt/setEntryAt providing volatile semantics.
  • * 每个Segment Table装载HashEntry的数组,volatile修饰
  • * 元素主要通过具有volatile语义的entryAt/setEntryAt方法进行获取和设置
  • */
  • transient volatile HashEntry<K, V>[] table;
  • /**
  • * The number of elements. Accessed only either within locks
  • * or among other volatile reads that maintain visibility.
  • * 元素的个数
  • */
  • transient int count;
  • /**
  • * The total number of mutative operations in this segment.
  • * Even though this may overflows 32 bits, it provides
  • * sufficient accuracy for stability checks in CHM isEmpty()
  • * and size() methods. Accessed only either within locks or
  • * among other volatile reads that maintain visibility.
  • * 当前Segment的修改次数
  • */
  • transient int modCount;
  • /**
  • * The table is rehashed when its size exceeds this threshold.
  • * (The value of this field is always <tt>(int)(capacity *
  • * loadFactor)</tt>.)
  • * 当HashEntry数组大小超过这个极值,就会触发rehash操作
  • * threshold = (int)(capacity * loadFactor)
  • * 即 threshold = 容量 * 加载因子
  • */
  • transient int threshold;
  • /**
  • * The load factor for the hash table. Even though this value
  • * is same for all segments, it is replicated to avoid needing
  • * links to outer object.
  • * 加载因子,所有Segment都一样
  • *
  • * @serial
  • */
  • final float loadFactor;
  • // 从一个HashEntry数组构造Segment
  • Segment(float lf, int threshold, HashEntry<K, V>[] tab) {
  • this.loadFactor = lf;
  • this.threshold = threshold;
  • this.table = tab;
  • }
  • // 添加元素
  • final V put(K key, int hash, V value, boolean onlyIfAbsent) {
  • /**
  • * 先尝试获取锁,tryLock()方法是继承自ReentrantLock的方法
  • * 如果获取到锁了,node为null,否则执行scanAndLockForPut(key, hash, value)获取node
  • * scanAndLockForPut()内部会不断自旋尝试获取锁,一旦其返回了即说明获取到锁了
  • * 如果返回值不为null,表示是相应HashEntry链表的头节点
  • *
  • * 当获取到锁之后,会根据hash找到对应的HashEntry链表(此处具体实现是找到链表的头元素)
  • * 然后在链表中根据key进行遍历查找
  • * 如果查找到了key相同的HashEntry,就判断onlyIfAbsent是否为true,如果是就更新该HashEntry的值为value,否则直接跳出
  • * 如果没有查找到,可能有两种情况,链表为空或者确实没有
  • * 则根据传入的key、hash、value创建一个HashEntry,设置到index位置上
  • */
  • HashEntry<K, V> node = tryLock() ? null : scanAndLockForPut(key, hash, value);
  • V oldValue;
  • try {
  • HashEntry<K, V>[] tab = table;
  • // 计算index,即 (table的长度 - 1) & hash值
  • int index = (tab.length - 1) & hash;
  • // 获取table上索引为index上的元素
  • HashEntry<K, V> first = entryAt(tab, index);
  • for (HashEntry<K, V> e = first; ; ) {
  • if (e != null) {
  • // 获取到的元素不为null,表示index上已经有元素了
  • K k;
  • if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
  • // 走到这里,表示put进的键与已经存在于index位置上HashEntry的键相同
  • // 先记录旧值
  • oldValue = e.value;
  • if (!onlyIfAbsent) {
  • /**
  • * 如果没有指定onlyIfAbsent为true,即没有要求必须是不存在的情况下添加
  • * 就更新index位置上已存在的元素的值为value
  • */
  • e.value = value;
  • // modCount自增
  • ++modCount;
  • }
  • break;
  • }
  • // 走到这里表示key与e.key不相同,则遍历链表下一个元素进行判断
  • e = e.next;
  • } else {
  • // 获取到的元素为null,表示index上没有元素
  • if (node != null)
  • // 如果node不为null,将first设置为node的后继元素
  • node.setNext(first);
  • else
  • // 如果node为null,则创建一个新的HashEntry赋值给node,并将first作为node的后继节点
  • node = new HashEntry<K, V>(hash, key, value, first);
  • /**
  • * 计算Segment中元素个数是否超过阈值,如果超过了就进行rehash操作
  • * 否则将node放在table的index位置上
  • */
  • int c = count + 1;
  • if (c > threshold && tab.length < MAXIMUM_CAPACITY)
  • rehash(node);
  • else
  • setEntryAt(tab, index, node);
  • // 更新modCount
  • ++modCount;
  • // 更新Segment元素个数计数器
  • count = c;
  • oldValue = null;
  • break;
  • }
  • }
  • } finally {
  • // 解锁
  • unlock();
  • }
  • return oldValue;
  • }
  • /**
  • * Doubles size of table and repacks entries, also adding the
  • * given node to new table
  • *
  • * 重哈希操作,会对Segment进行扩容,然后将传入的节点添加到Segment的table数组中
  • */
  • @SuppressWarnings("unchecked")
  • private void rehash(HashEntry<K, V> node) {
  • /*
  • * Reclassify nodes in each list to new table. Because we
  • * are using power-of-two expansion, the elements from
  • * each bin must either stay at same index, or move with a
  • * power of two offset. We eliminate unnecessary node
  • * creation by catching cases where old nodes can be
  • * reused because their next fields won't change.
  • * Statistically, at the default threshold, only about
  • * one-sixth of them need cloning when a table
  • * doubles. The nodes they replace will be garbage
  • * collectable as soon as they are no longer referenced by
  • * any reader thread that may be in the midst of
  • * concurrently traversing table. Entry accesses use plain
  • * array indexing because they are followed by volatile
  • * table write.
  • */
  • // 记录旧table
  • HashEntry<K, V>[] oldTable = table;
  • // 记录旧的容量
  • int oldCapacity = oldTable.length;
  • // 新的容量是旧的容量的两倍
  • int newCapacity = oldCapacity << 1;
  • // 计算新的阈值
  • threshold = (int) (newCapacity * loadFactor);
  • // 先创建一个新的HashEntry数组newtable
  • HashEntry<K, V>[] newTable = (HashEntry<K, V>[]) new HashEntry[newCapacity];
  • int sizeMask = newCapacity - 1;
  • for (int i = 0; i < oldCapacity; i++) {
  • // 遍历取出旧元素e
  • HashEntry<K, V> e = oldTable[i];
  • if (e != null) {
  • // e的后继
  • HashEntry<K, V> next = e.next;
  • // 计算新的index为idx
  • int idx = e.hash & sizeMask;
  • if (next == null) // Single node on list
  • // 如果e的后继为空,表示e是一个单节点,则直接将其存入newTable的idx位置
  • newTable[idx] = e;
  • else {
  • // Reuse consecutive sequence at same slot
  • // 走到这里说明e是一个HashEntry链的头节点
  • HashEntry<K, V> lastRun = e;
  • int lastIdx = idx;
• /**
• * 这是一种优化做法,可以将链表尾部新索引一致的一段元素一次性迁移
• * 从next开始向后遍历,每当遇到新index与当前记录的lastIdx不同的节点,就更新lastRun和lastIdx
• * 遍历结束后,lastRun指向链表尾部最后一段新index完全相同的连续节点的起点
• * lastRun及其之后的所有节点都映射到同一个新槽位,可以整段复用,无需逐个复制节点
• */
  • for (HashEntry<K, V> last = next; last != null; last = last.next) {
  • // 计算当前遍历到的节点新的index为k
  • int k = last.hash & sizeMask;
  • if (k != lastIdx) {
  • // 如果当前节点新的index与前驱节点的新的index不同,则记录当前节点新的index为lastIdx
  • lastIdx = k;
  • // 记录当前节点为lastRun
  • lastRun = last;
  • }
  • }
  • /**
  • * 将上面计算出来的lastRun放在newTable的lastIdx位置的槽中,这个lastIdx也是上面新计算出来的
  • * 由于HashEntry链是一个链表,因此lastRun后面的元素也会一并移动到newTable的lastIdx位置的槽中
  • */
  • newTable[lastIdx] = lastRun;
  • // 复制e元素对应的HashEntry链上剩余的元素到新的newTable中
  • for (HashEntry<K, V> p = e; p != lastRun; p = p.next) {
  • V v = p.value;
  • int h = p.hash;
  • // 计算新的index为k
  • int k = h & sizeMask;
  • // 取出原来k位置上的元素n
  • HashEntry<K, V> n = newTable[k];
  • // 构建新节点,将n作为新节点的后继节点,然后将新节点放在newTable的k位置上,相当于把新节点插在链表头部
  • newTable[k] = new HashEntry<K, V>(h, p.key, v, n);
  • }
  • }
  • }
  • }
  • // 将传入的node添加到对应的HashEntry链表头
  • int nodeIndex = node.hash & sizeMask; // add the new node
  • node.setNext(newTable[nodeIndex]);
  • newTable[nodeIndex] = node;
  • // 将newTable赋值给table
  • table = newTable;
  • }
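上面rehash中的lastRun优化可以用下面的示意代码单独演示。注意这里的Node类是为演示而假设的简化结构,与JDK源码中的HashEntry无关,newIndex直接预先给出,代替了真实代码中的`hash & sizeMask`计算:

```java
// 演示rehash中lastRun优化的查找过程:
// 找到链表尾部新槽位完全相同的最后一段连续节点的起点lastRun,
// 这一段节点可整体复用,只有lastRun之前的节点需要逐个复制。
class LastRunDemo {
    static final class Node {
        final int newIndex; // 该节点在扩容后新表中的槽位(演示中直接给定)
        Node next;
        Node(int newIndex) { this.newIndex = newIndex; }
    }

    // 返回lastRun节点:其后所有节点的newIndex都与它相同
    static Node findLastRun(Node head) {
        Node lastRun = head;
        int lastIdx = head.newIndex;
        for (Node p = head.next; p != null; p = p.next) {
            if (p.newIndex != lastIdx) { // 槽位发生变化,更新lastRun
                lastIdx = p.newIndex;
                lastRun = p;
            }
        }
        return lastRun;
    }

    // 构建链表 1 -> 3 -> 1 -> 1,lastRun应指向第三个节点(其后槽位均为1)
    static boolean demo() {
        Node a = new Node(1), b = new Node(3), c = new Node(1), d = new Node(1);
        a.next = b; b.next = c; c.next = d;
        return findLastRun(a) == c;
    }

    public static void main(String[] args) {
        if (!demo()) throw new AssertionError("lastRun lookup failed");
        System.out.println("ok");
    }
}
```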
  • /**
  • * Scans for a node containing given key while trying to
  • * acquire lock, creating and returning one if not found. Upon
  • * return, guarantees that lock is held. UNlike in most
  • * methods, calls to method equals are not screened: Since
  • * traversal speed doesn't matter, we might as well help warm
  • * up the associated code and accesses as well.
  • *
• * 在尝试获取锁的同时扫描链表,寻找键与给定key相同的HashEntry节点,
• * 如果没找到就创建一个新的节点,然后将这个节点返回。
• * 该方法返回时保证当前线程已持有锁
  • *
  • * @return a new node if key not found, else null
  • */
  • private HashEntry<K, V> scanAndLockForPut(K key, int hash, V value) {
  • // 尝试获取HashEntry链的第一个元素
  • HashEntry<K, V> first = entryForHash(this, hash);
  • HashEntry<K, V> e = first;
  • HashEntry<K, V> node = null;
  • int retries = -1; // negative while locating node
  • while (!tryLock()) {
  • // 走到这里说明获取锁失败
  • HashEntry<K, V> f; // to recheck first below
  • if (retries < 0) {
• // 如果retries小于0,表示仍处于定位节点的阶段,尚未开始计数重试
  • /**
  • * 从entryForHash()方法可知,e为null有两种情况:
  • * 1. Segment为空
• * 2. Segment的table为空
  • */
  • if (e == null) {
  • // 创建一个HashEntry节点
  • if (node == null) // speculatively create node
  • node = new HashEntry<K, V>(hash, key, value, null);
  • // retries次数置为0,下次就不会走if (retries < 0)的分支了
  • retries = 0;
  • } else if (key.equals(e.key))
  • // 当e不为null,且key与e.key相同时,将retries次数置为0
  • retries = 0;
  • else
  • // 走到这里,说明e不为null,且key与e.key不同,则继续往后查找
  • e = e.next;
  • } else if (++retries > MAX_SCAN_RETRIES) {
  • // 当重试次数超过限制,强制上锁(可能会进入阻塞状态)
  • lock();
  • break;
  • } else if ( // 是否是偶数次尝试
  • (retries & 1) == 0 &&
  • // 重新尝试获取HashEntry链的第一个元素,然后判断本次获取的第一个元素是否与之前的first相同
  • (f = entryForHash(this, hash)) != first) {
  • /**
  • * 走到这里说明是偶数次重试,并且HashEntry链的第一个元素发生了改变
  • * 如果是的话,就应该重新遍历,
  • * 将e和first置为新获取的HashEntry链第一个元素
  • * 重置retries为-1,然后进入下一次循环
  • */
  • e = first = f; // re-traverse if entry changed
  • retries = -1;
  • }
  • }
• /**
• * 该方法返回node:如果没有找到与key对应的节点,node是新创建的HashEntry节点,否则为null
• * 只有在tryLock()或lock()获取到锁之后该方法才会返回,否则会一直自旋重试
• */
  • return node;
  • }
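scanAndLockForPut的核心是"有限次tryLock自旋,超过上限后退化为阻塞lock()"。下面是这一模式的最小化示意(MAX_SCAN_RETRIES取64仅作假设;JDK中该值取决于CPU核数,真实实现还会在自旋期间顺带扫描链表并预创建节点,这里全部省略):

```java
import java.util.concurrent.locks.ReentrantLock;

// "自旋tryLock + 超限阻塞lock()"获取锁模式的骨架演示
class SpinThenLockDemo {
    static final int MAX_SCAN_RETRIES = 64; // 假设值,仅作示意

    // 返回true表示通过自旋拿到锁,false表示最终通过阻塞lock()拿到锁
    static boolean acquire(ReentrantLock lock) {
        int retries = 0;
        while (!lock.tryLock()) {
            if (++retries > MAX_SCAN_RETRIES) {
                lock.lock(); // 超过重试上限,阻塞等待
                return false;
            }
            // 真实实现会在自旋期间做扫描链表等有用工作
        }
        return true;
    }

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        boolean spun = acquire(lock); // 无竞争时第一次tryLock即成功
        try {
            if (!lock.isHeldByCurrentThread()) throw new AssertionError();
            System.out.println(spun ? "acquired by spin" : "acquired by blocking lock");
        } finally {
            lock.unlock();
        }
    }
}
```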
  • /**
  • * Scans for a node containing the given key while trying to
  • * acquire lock for a remove or replace operation. Upon
  • * return, guarantees that lock is held. Note that we must
  • * lock even if the key is not found, to ensure sequential
  • * consistency of updates.
  • * 该方法与 {@link #scanAndLockForPut} 类似,但只有扫描上锁操作
  • */
  • private void scanAndLock(Object key, int hash) {
  • // similar to but simpler than scanAndLockForPut
  • HashEntry<K, V> first = entryForHash(this, hash);
  • HashEntry<K, V> e = first;
  • int retries = -1;
  • while (!tryLock()) {
  • HashEntry<K, V> f;
  • if (retries < 0) {
  • if (e == null || key.equals(e.key))
  • retries = 0;
  • else
  • e = e.next;
  • } else if (++retries > MAX_SCAN_RETRIES) {
  • lock();
  • break;
  • } else if ((retries & 1) == 0 &&
  • (f = entryForHash(this, hash)) != first) {
  • e = first = f;
  • retries = -1;
  • }
  • }
  • }
  • /**
  • * Remove; match on key only if value null, else match both.
  • * 移除操作,返回移除的元素
  • */
  • final V remove(Object key, int hash, Object value) {
  • // 先尝试上锁
  • if (!tryLock())
  • scanAndLock(key, hash);
  • V oldValue = null;
  • try {
  • HashEntry<K, V>[] tab = table;
  • int index = (tab.length - 1) & hash;
  • // 获取对应的元素e
  • HashEntry<K, V> e = entryAt(tab, index);
  • HashEntry<K, V> pred = null;
  • while (e != null) {
  • K k;
  • // e的后继
  • HashEntry<K, V> next = e.next;
• // 键引用相同,或者hash值相同且equals相等,表示找到了目标节点
  • if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
  • // 获取元素的value
  • V v = e.value;
  • if (value == null || value == v || value.equals(v)) {
  • if (pred == null)
  • // 如果pred为null,将e的后继节点放在e现在的位置上
  • setEntryAt(tab, index, next);
  • else
  • // 否则将e的后继节点作为pred的后继,即跳过了原来的e节点
  • pred.setNext(next);
  • // 更新modCount计数器
  • ++modCount;
• // 更新Segment元素个数
  • --count;
  • // 赋值给oldValue
  • oldValue = v;
  • }
  • break;
  • }
• // 如果key不匹配,则记录当前节点为前驱,继续遍历下一个节点
  • pred = e;
  • e = next;
  • }
  • } finally {
  • // 解锁
  • unlock();
  • }
  • return oldValue;
  • }
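remove中摘除节点分两种情况:被删节点是表头(pred为null)时,直接让后继成为新表头;否则让前驱跳过被删节点。下面用一个为演示而假设的简化Node类单独演示这一单向链表摘除逻辑:

```java
// 单向链表节点摘除的两种情况演示(Node为假设的简化结构)
class UnlinkDemo {
    static final class Node {
        final int key;
        Node next;
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    // 删除链表中key对应的节点,返回新表头
    static Node remove(Node head, int key) {
        Node pred = null;
        for (Node e = head; e != null; e = e.next) {
            Node next = e.next;
            if (e.key == key) {
                if (pred == null)
                    head = next;      // 删除的是头节点,后继成为新表头
                else
                    pred.next = next; // 前驱直接跳过被删节点
                break;
            }
            pred = e;
        }
        return head;
    }

    // 链表 3 -> 2 -> 1,删除2后应为 3 -> 1
    static String demo() {
        Node head = new Node(3, new Node(2, new Node(1, null)));
        head = remove(head, 2);
        StringBuilder sb = new StringBuilder();
        for (Node e = head; e != null; e = e.next) sb.append(e.key);
        return sb.toString();
    }

    public static void main(String[] args) {
        if (!"31".equals(demo())) throw new AssertionError();
        System.out.println("ok");
    }
}
```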
  • // 替换节点,返回结果标识是否进行了更新
  • final boolean replace(K key, int hash, V oldValue, V newValue) {
  • // 上锁
  • if (!tryLock())
  • scanAndLock(key, hash);
  • boolean replaced = false;
  • try {
  • HashEntry<K, V> e;
  • // 遍历
  • for (e = entryForHash(this, hash); e != null; e = e.next) {
  • K k;
  • if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
  • // 走到这里说明找到了key和hash都相同的节点
  • if (oldValue.equals(e.value)) {
  • // 如果找到的节点当前的value与oldValue一致则进行修改
  • e.value = newValue;
  • // 更新计数器
  • ++modCount;
  • replaced = true;
  • }
  • break;
  • }
  • }
  • } finally {
  • // 解锁
  • unlock();
  • }
  • // 返回的结果标识是否进行了更新
  • return replaced;
  • }
  • /**
  • * 替换节点的值为value,并返回旧值
• * 与上面的replace相比,这个方法不会比较旧值是否一致,且返回值不同(返回旧值而非boolean),其他流程一样
  • */
  • final V replace(K key, int hash, V value) {
  • // 加锁
  • if (!tryLock())
  • scanAndLock(key, hash);
  • V oldValue = null;
  • try {
  • HashEntry<K, V> e;
  • // 遍历寻找节点
  • for (e = entryForHash(this, hash); e != null; e = e.next) {
  • K k;
  • if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
  • // key相同,hash值也相同,表示找到了节点
  • // 记录节点旧值
  • oldValue = e.value;
  • // 修改节点的value值
  • e.value = value;
  • ++modCount;
  • break;
  • }
  • }
  • } finally {
  • // 解锁
  • unlock();
  • }
  • // 返回旧值
  • return oldValue;
  • }
  • // 清理操作,会将table中已有的节点全部置为null
  • final void clear() {
  • lock();
  • try {
  • // 遍历table数组,将每个元素都置为null
  • HashEntry<K, V>[] tab = table;
  • for (int i = 0; i < tab.length; i++)
  • setEntryAt(tab, i, null);
  • ++modCount;
  • count = 0;
  • } finally {
  • unlock();
  • }
  • }
  • }
  • // Accessing segments
  • /**
  • * Gets the jth element of given segment array (if nonnull) with
  • * volatile element access semantics via Unsafe. (The null check
  • * can trigger harmlessly only during deserialization.) Note:
  • * because each element of segments array is set only once (using
  • * fully ordered writes), some performance-sensitive methods rely
  • * on this method only as a recheck upon null reads.
  • * 获取Segment数组ss中索引为j的Segment元素
  • */
  • @SuppressWarnings("unchecked")
  • static final <K, V> Segment<K, V> segmentAt(Segment<K, V>[] ss, int j) {
  • long u = (j << SSHIFT) + SBASE;
  • return ss == null ? null : (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u);
  • }
  • /**
  • * Returns the segment for the given index, creating it and
  • * recording in segment table (via CAS) if not already present.
  • * 返回指定索引位置的Segment,当不存在时将创建它
  • *
  • * @param k the index
  • * @return the segment
  • */
  • @SuppressWarnings("unchecked")
  • private Segment<K, V> ensureSegment(int k) {
  • // 获取Segment数组
  • final Segment<K, V>[] ss = this.segments;
  • // 计算索引为k的Segment在Segment数组中的原始偏移量
  • long u = (k << SSHIFT) + SBASE; // raw offset
  • Segment<K, V> seg;
  • // 尝试获取
  • if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
  • // 走到这里表示获取到的Segment为null
  • // 使用索引为0位置上的Segment作为原型,主要将其table长度、加载因子等值用于新Segment的创建
  • Segment<K, V> proto = ss[0]; // use segment 0 as prototype
  • // table长度
  • int cap = proto.table.length;
  • // 加载因子
  • float lf = proto.loadFactor;
  • // 阈值
  • int threshold = (int) (cap * lf);
  • // 创建HashEntry链数组
  • HashEntry<K, V>[] tab = (HashEntry<K, V>[]) new HashEntry[cap];
  • // 重新检查索引为k的Segment是否为null
  • if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) { // recheck
  • // 创建新的Segment
  • Segment<K, V> s = new Segment<K, V>(lf, threshold, tab);
  • while ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
  • // CAS方式赋值
  • if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
  • // 赋值成功即跳出while循环
  • break;
  • }
  • }
  • }
  • return seg;
  • }
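ensureSegment采用"volatile读 → 重新检查 → CAS写入"的惰性初始化模式,保证并发调用时所有线程最终拿到同一个Segment实例。下面用AtomicReferenceArray代替Unsafe给出该模式的示意性实现(这是对同一模式的替代写法,JDK源码中实际使用UNSAFE.getObjectVolatile/compareAndSwapObject):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// "读-检查-CAS写"惰性初始化模式的示意实现
class EnsureSlotDemo {
    static final AtomicReferenceArray<String> slots = new AtomicReferenceArray<>(16);

    static String ensureSlot(int k) {
        String seg = slots.get(k);            // volatile读
        if (seg == null) {
            String created = "segment-" + k;  // 在CAS之前先完成对象构造
            while ((seg = slots.get(k)) == null) {
                if (slots.compareAndSet(k, null, created)) { // CAS写入
                    seg = created;
                    break; // 写入成功即退出;失败则说明他人已写入,下次循环读到即退出
                }
            }
        }
        return seg; // 无论哪个线程的CAS胜出,所有调用者看到同一个实例
    }

    public static void main(String[] args) {
        String a = ensureSlot(3);
        String b = ensureSlot(3);
        if (a != b) throw new AssertionError("must return the same instance");
        System.out.println(a);
    }
}
```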
  • // Hash-based segment and entry accesses
  • /**
  • * Get the segment for the given hash
  • */
  • @SuppressWarnings("unchecked")
  • private Segment<K, V> segmentForHash(int h) {
  • long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
  • return (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u);
  • }
  • /**
  • * Gets the table entry for the given segment and hash
  • * 获取某个Segment中相应hash值的HashEntry
  • */
  • @SuppressWarnings("unchecked")
  • static final <K, V> HashEntry<K, V> entryForHash(Segment<K, V> seg, int h) {
  • HashEntry<K, V>[] tab;
  • /**
  • * 如果Segment为空或者Segment的table为空则返回空
  • * 否则通过Unsafe方法获取相应的HashEntry
  • */
  • return (seg == null || (tab = seg.table) == null) ? null :
  • (HashEntry<K, V>) UNSAFE.getObjectVolatile
  • (tab, ((long) (((tab.length - 1) & h)) << TSHIFT) + TBASE);
  • }
  • /* ---------------- Public operations -------------- */
  • /**
  • * Creates a new, empty map with the specified initial
  • * capacity, load factor and concurrency level.
  • * 根据初始容量、平衡因子和并发级别创建一个ConcurrentHashMap
  • *
  • * @param initialCapacity 初始容量
  • * @param loadFactor 平衡因子,用于计算阈值
  • * @param concurrencyLevel 并发级别
  • * @throws IllegalArgumentException 当参数不规范时会抛出异常
  • */
  • @SuppressWarnings("unchecked")
  • public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
  • if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
  • throw new IllegalArgumentException();
• // 检查并发级别,最大并发级别为MAX_SEGMENTS(1 << 16,即65536)
  • if (concurrencyLevel > MAX_SEGMENTS)
  • concurrencyLevel = MAX_SEGMENTS;
  • // Find power-of-two sizes best matching arguments
  • /**
• * 根据并发级别,计算Segment的个数(2的幂,最大为MAX_SEGMENTS即65536)
  • * 其中sshift记录了左移次数,ssize则计算了大小
  • */
  • int sshift = 0;
  • int ssize = 1;
• // ssize为不小于concurrencyLevel的最小2的幂
  • while (ssize < concurrencyLevel) {
  • ++sshift;
  • ssize <<= 1;
  • }
  • // 这两个值后面会讲解
  • this.segmentShift = 32 - sshift;
  • this.segmentMask = ssize - 1;
  • /**
  • * 检查哈希表的初始容量
  • * 哈希表的实际容量 = Segments的容量 * Segment中数组的长度
  • */
  • if (initialCapacity > MAXIMUM_CAPACITY)
  • initialCapacity = MAXIMUM_CAPACITY;
• // 计算每个Segment的table(HashEntry数组)应当装载的数据量
  • int c = initialCapacity / ssize;
  • if (c * ssize < initialCapacity)
  • ++c;
  • int cap = MIN_SEGMENT_TABLE_CAPACITY;
• // cap即Segment中HashEntry数组的长度,为不小于c的最小2的幂
  • while (cap < c)
  • cap <<= 1;
  • // create segments and segments[0]
  • // 创建Segment数组的第0个Segment对象s0
  • Segment<K, V> s0 = new Segment<K, V>(loadFactor, (int) (cap * loadFactor), (HashEntry<K, V>[]) new HashEntry[cap]);
  • // 创建Segment数组
  • Segment<K, V>[] ss = (Segment<K, V>[]) new Segment[ssize];
  • // 将s0装入到Segment数组ss索引为0的位置
  • UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
  • this.segments = ss;
  • }
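构造函数中ssize/sshift的计算过程可以单独抽出来验证。以默认并发级别16为例:ssize=16、sshift=4,于是segmentShift=32-4=28、segmentMask=15;非2的幂的并发级别会被向上取整:

```java
// 演示构造函数中根据concurrencyLevel计算ssize/sshift/segmentShift/segmentMask的过程
class SegmentSizingDemo {
    // 返回数组 {ssize, sshift, segmentShift, segmentMask}
    static int[] sizing(int concurrencyLevel) {
        int sshift = 0;
        int ssize = 1;
        // ssize为不小于concurrencyLevel的最小2的幂,sshift记录左移次数
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        return new int[] { ssize, sshift, 32 - sshift, ssize - 1 };
    }

    public static void main(String[] args) {
        int[] r = sizing(16); // 默认并发级别16
        if (r[0] != 16 || r[1] != 4 || r[2] != 28 || r[3] != 15)
            throw new AssertionError();
        int[] r2 = sizing(10); // 非2的幂会向上取整到16
        if (r2[0] != 16) throw new AssertionError();
        System.out.println("ok");
    }
}
```

之后`(hash >>> segmentShift) & segmentMask`就是取hash的高sshift位作为Segment下标。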
  • /**
  • * Creates a new, empty map with the specified initial capacity
  • * and load factor and with the default concurrencyLevel (16).
  • *
  • * @param initialCapacity The implementation performs internal
  • * sizing to accommodate this many elements.
  • * @param loadFactor the load factor threshold, used to control resizing.
  • * Resizing may be performed when the average number of elements per
  • * bin exceeds this threshold.
  • * @throws IllegalArgumentException if the initial capacity of
  • * elements is negative or the load factor is nonpositive
  • * @since 1.6
  • */
  • public ConcurrentHashMap(int initialCapacity, float loadFactor) {
  • this(initialCapacity, loadFactor, DEFAULT_CONCURRENCY_LEVEL);
  • }
  • /**
  • * Creates a new, empty map with the specified initial capacity,
  • * and with default load factor (0.75) and concurrencyLevel (16).
  • *
  • * @param initialCapacity the initial capacity. The implementation
  • * performs internal sizing to accommodate this many elements.
  • * @throws IllegalArgumentException if the initial capacity of
  • * elements is negative.
  • */
  • public ConcurrentHashMap(int initialCapacity) {
  • this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
  • }
  • /**
  • * Creates a new, empty map with a default initial capacity (16),
  • * load factor (0.75) and concurrencyLevel (16).
  • */
  • public ConcurrentHashMap() {
  • this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
  • }
  • /**
  • * Creates a new map with the same mappings as the given map.
  • * The map is created with a capacity of 1.5 times the number
  • * of mappings in the given map or 16 (whichever is greater),
  • * and a default load factor (0.75) and concurrencyLevel (16).
  • *
  • * @param m the map
  • */
  • public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
  • this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
  • DEFAULT_INITIAL_CAPACITY),
  • DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
  • putAll(m);
  • }
  • /**
  • * Returns <tt>true</tt> if this map contains no key-value mappings.
  • * 判断Map是否为空
  • *
  • * @return <tt>true</tt> if this map contains no key-value mappings
  • */
  • public boolean isEmpty() {
  • /*
  • * Sum per-segment modCounts to avoid mis-reporting when
  • * elements are concurrently added and removed in one segment
  • * while checking another, in which case the table was never
  • * actually empty at any point. (The sum ensures accuracy up
  • * through at least 1<<31 per-segment modifications before
  • * recheck.) Methods size() and containsValue() use similar
  • * constructions for stability checks.
  • */
  • long sum = 0L;
  • // 获取Segment数组
  • final Segment<K, V>[] segments = this.segments;
  • // 循环遍历Segment数组
  • for (int j = 0; j < segments.length; ++j) {
  • Segment<K, V> seg = segmentAt(segments, j);
  • if (seg != null) {
  • // 一旦某个Segment中的元素不为0,就返回false
  • if (seg.count != 0)
  • return false;
  • // 更新modCount总和,这次是加操作
  • sum += seg.modCount;
  • }
  • }
  • // 重新检查
  • if (sum != 0L) { // recheck unless no modifications
  • for (int j = 0; j < segments.length; ++j) {
  • Segment<K, V> seg = segmentAt(segments, j);
  • if (seg != null) {
  • if (seg.count != 0)
  • return false;
  • // 更新modCount总和,这次是减操作
  • sum -= seg.modCount;
  • }
  • }
  • // 二次检查后,sum应该为0,如果不为0表示有其他线程修改了Map导致元素数量发生了变化,则返回false
  • if (sum != 0L)
  • return false;
  • }
  • return true;
  • }
  • /**
  • * Returns the number of key-value mappings in this map. If the
  • * map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns
  • * <tt>Integer.MAX_VALUE</tt>.
  • * 返回Map中键值对的个数,当超过Integer.MAX_VALUE时会返回Integer.MAX_VALUE
  • *
  • * @return the number of key-value mappings in this map
  • */
  • public int size() {
  • /**
  • * Try a few times to get accurate count. On failure due to continuous async changes in table, resort to locking.
• * 多次尝试以获取准确的数量;如果由于table持续的异步修改而无法获得稳定结果,则退而对所有Segment加锁统计
  • */
  • // 获取Segment数组
  • final Segment<K, V>[] segments = this.segments;
  • int size;
  • // 标识是否溢出
  • boolean overflow; // true if size overflows 32 bits
  • // 本次计算后得到的modCount
  • long sum; // sum of modCounts
  • // 上一次计算后得到的modCount
  • long last = 0L; // previous sum
  • int retries = -1; // first iteration isn't retry
  • try {
  • // 循环不断尝试
  • for (; ; ) {
  • if (retries++ == RETRIES_BEFORE_LOCK) {
  • // 遍历操作,对所有的Segment上锁
  • for (int j = 0; j < segments.length; ++j)
  • /**
  • * 获取Segment数组中索引为j位置上的Segment,并上锁
  • * 这个操作可以保证如果索引为j位置上没有Segment,就新创建一个
  • */
  • ensureSegment(j).lock(); // force creation
  • }
  • sum = 0L;
  • size = 0;
  • overflow = false;
  • // 遍历所有的Segment,叠加计算其存储的元素的数量
  • for (int j = 0; j < segments.length; ++j) {
  • // 获取Segment
  • Segment<K, V> seg = segmentAt(segments, j);
  • if (seg != null) {
  • // 更新modCount计数器
  • sum += seg.modCount;
  • int c = seg.count;
  • // 叠加并判断相关数量是否溢出
  • if (c < 0 || (size += c) < 0)
  • overflow = true;
  • }
  • }
• /**
• * 将本次计算得到的modCount总和与上一次的对比,
• * 如果相同就跳出for循环,结束计数
• * 这种方式可以保证计数的准确性:如果两次的modCount总和不一致,
• * 说明期间可能有其他线程修改了Map导致元素数量发生了变化,需要重试
• */
  • if (sum == last)
  • break;
  • // 记录这一次计算后得到modCount
  • last = sum;
  • }
  • } finally {
  • // 循环对所有的Segment解锁
  • if (retries > RETRIES_BEFORE_LOCK) {
  • for (int j = 0; j < segments.length; ++j)
  • segmentAt(segments, j).unlock();
  • }
  • }
  • // 返回值
  • return overflow ? Integer.MAX_VALUE : size;
  • }
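size()的骨架是"反复无锁统计,直到两次modCount总和一致;重试若干次仍不稳定才加锁"。下面用普通数组模拟各Segment的count/modCount,演示这一稳定性检查循环(加锁分支仅以注释示意;RETRIES_BEFORE_LOCK在JDK中为2):

```java
// "无锁重试直到两次modCount总和一致"计数骨架的演示
class StableSumDemo {
    static final int RETRIES_BEFORE_LOCK = 2;

    // counts/modCounts按下标一一对应各"Segment"(演示中为静态快照)
    static int sizeOf(int[] counts, int[] modCounts) {
        long last = 0L;
        int size;
        int retries = -1;
        for (;;) {
            if (retries++ == RETRIES_BEFORE_LOCK) {
                // 真实实现在这里对所有Segment加锁后再统计,演示中省略
            }
            long sum = 0L;
            size = 0;
            for (int j = 0; j < counts.length; ++j) {
                sum += modCounts[j];
                size += counts[j];
            }
            if (sum == last) // 两次modCount总和一致,说明期间无修改,结果可信
                break;
            last = sum;
        }
        return size;
    }

    public static void main(String[] args) {
        int[] counts = { 2, 3, 5 };
        int[] modCounts = { 2, 3, 5 };
        if (sizeOf(counts, modCounts) != 10) throw new AssertionError();
        System.out.println("ok");
    }
}
```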
  • /**
  • * Returns the value to which the specified key is mapped,
  • * or {@code null} if this map contains no mapping for the key.
  • *
  • * <p>More formally, if this map contains a mapping from a key
  • * {@code k} to a value {@code v} such that {@code key.equals(k)},
  • * then this method returns {@code v}; otherwise it returns
  • * {@code null}. (There can be at most one such mapping.)
  • *
  • * 获取指定键对应的值,如果在Map中没有找到该键,将返回null
  • *
  • * @throws NullPointerException if the specified key is null
  • */
  • public V get(Object key) {
  • Segment<K, V> s; // manually integrate access methods to reduce overhead
  • HashEntry<K, V>[] tab;
  • // 哈希值
  • int h = hash(key);
  • // 获取Segment对应的索引
  • long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
  • // 获取Segment
  • if ((s = (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u)) != null && (tab = s.table) != null) {
  • // 获取并遍历HashEntry链
  • for (HashEntry<K, V> e = (HashEntry<K, V>) UNSAFE.getObjectVolatile
  • (tab, ((long) (((tab.length - 1) & h)) << TSHIFT) + TBASE);
  • e != null; e = e.next) {
  • K k;
  • // 键相同,哈希值也相同
  • if ((k = e.key) == key || (e.hash == h && key.equals(k)))
  • // 返回值
  • return e.value;
  • }
  • }
  • // 否则返回null
  • return null;
  • }
  • /**
  • * Tests if the specified object is a key in this table.
  • * 判断某个键是否存在于Map中
  • *
  • * @param key possible key
  • * @return <tt>true</tt> if and only if the specified object
  • * is a key in this table, as determined by the
  • * <tt>equals</tt> method; <tt>false</tt> otherwise.
  • * @throws NullPointerException if the specified key is null 键不可为空
  • */
  • @SuppressWarnings("unchecked")
  • public boolean containsKey(Object key) {
  • Segment<K, V> s; // same as get() except no need for volatile value read
  • HashEntry<K, V>[] tab;
  • // 获取key的哈希值
  • int h = hash(key);
  • // 根据哈希值计算相应的Segment在Segment数组中的偏移量
  • long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
  • // 根据偏移量获取Segment
  • if ((s = (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u)) != null && (tab = s.table) != null) {
  • // 获取到的Segment不为null,且该Segment的table不为null,就遍历该Segment的table中的HashEntry对象
  • for (HashEntry<K, V> e = (HashEntry<K, V>) UNSAFE.getObjectVolatile
  • (tab, ((long) (((tab.length - 1) & h)) << TSHIFT) + TBASE);
  • e != null; e = e.next) {
  • K k;
  • // key相同,key的hash也相同,表示找到了,就返回true;否则继续查找下一个HashEntry
  • if ((k = e.key) == key || (e.hash == h && key.equals(k)))
  • return true;
  • }
  • }
  • return false;
  • }
  • /**
  • * Returns <tt>true</tt> if this map maps one or more keys to the
  • * specified value. Note: This method requires a full internal
  • * traversal of the hash table, and so is much slower than
  • * method <tt>containsKey</tt>.
  • *
  • * 判断某个值是否存在于Map中,可能需要遍历整个Map,因此比containsKey要慢
  • *
  • * @param value value whose presence in this map is to be tested
  • * @return <tt>true</tt> if this map maps one or more keys to the
  • * specified value
  • * @throws NullPointerException if the specified value is null 值不能为null
  • */
  • public boolean containsValue(Object value) {
  • // Same idea as size() 实现方式与size()方法类似
  • // 检查value是否为空
  • if (value == null)
  • throw new NullPointerException();
  • // 获取Segment数组
  • final Segment<K, V>[] segments = this.segments;
  • // 标识是否找到
  • boolean found = false;
  • long last = 0;
  • int retries = -1;
  • try {
  • outer:
  • for (; ; ) {
• // 重试次数达到RETRIES_BEFORE_LOCK后,才对所有Segment上锁
  • if (retries++ == RETRIES_BEFORE_LOCK) {
  • for (int j = 0; j < segments.length; ++j)
  • ensureSegment(j).lock(); // force creation
  • }
  • long hashSum = 0L;
  • int sum = 0;
  • // 遍历Segment
  • for (int j = 0; j < segments.length; ++j) {
  • HashEntry<K, V>[] tab;
  • // 获取指定索引的Segment
  • Segment<K, V> seg = segmentAt(segments, j);
  • if (seg != null && (tab = seg.table) != null) {
• // 遍历Segment的table数组
  • for (int i = 0; i < tab.length; i++) {
  • HashEntry<K, V> e;
  • // 遍历HashEntry链
  • for (e = entryAt(tab, i); e != null; e = e.next) {
  • V v = e.value;
  • // 判断值是否相同
  • if (v != null && value.equals(v)) {
  • // 如果有相同的表示是包含这个值的
  • found = true;
  • // 跳出到外层循环
  • break outer;
  • }
  • }
  • }
  • sum += seg.modCount;
  • }
  • }
  • // 判断得到的modCount是否一致,保证精确性
  • if (retries > 0 && sum == last)
  • break;
  • last = sum;
  • }
  • } finally {
  • // 遍历Segment并解锁
  • if (retries > RETRIES_BEFORE_LOCK) {
  • for (int j = 0; j < segments.length; ++j)
  • segmentAt(segments, j).unlock();
  • }
  • }
  • return found;
  • }
  • /**
  • * Legacy method testing if some key maps into the specified value
  • * in this table. This method is identical in functionality to
  • * {@link #containsValue}, and exists solely to ensure
  • * full compatibility with class {@link java.util.Hashtable},
  • * which supported this method prior to introduction of the
  • * Java Collections framework.
  • *
  • * @param value a value to search for
  • * @return <tt>true</tt> if and only if some key maps to the
  • * <tt>value</tt> argument in this table as
  • * determined by the <tt>equals</tt> method;
  • * <tt>false</tt> otherwise
  • * @throws NullPointerException if the specified value is null
  • */
  • public boolean contains(Object value) {
  • return containsValue(value);
  • }
  • /**
  • * Maps the specified key to the specified value in this table.
  • * Neither the key nor the value can be null.
  • *
  • * <p> The value can be retrieved by calling the <tt>get</tt> method
  • * with a key that is equal to the original key.
  • *
  • * 添加键值对到Map中,键和值都不可以为null
  • *
  • * @param key key with which the specified value is to be associated
  • * @param value value to be associated with the specified key
  • * @return the previous value associated with <tt>key</tt>, or
  • * <tt>null</tt> if there was no mapping for <tt>key</tt>
  • * @throws NullPointerException if the specified key or value is null
  • */
  • @SuppressWarnings("unchecked")
  • public V put(K key, V value) {
  • Segment<K, V> s;
  • // 检查value是否为null
  • if (value == null)
  • throw new NullPointerException();
  • // 在hash()方法中如果key为null会抛出NullPointerException异常
  • int hash = hash(key);
  • // 根据hash计算对应Segment的偏移量
  • int j = (hash >>> segmentShift) & segmentMask;
  • // nonvolatile; recheck
  • // 获取对应的Segment
  • if ((s = (Segment<K, V>) UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null) // in ensureSegment
  • // 如果对应的Segment为空就使用ensureSegment()方法来创建
  • s = ensureSegment(j);
  • // 使用Segment的put方法添加元素,onlyIfAbsent传入的是false,即如存在就覆盖旧值
  • return s.put(key, hash, value, false);
  • }
  • /**
  • * {@inheritDoc}
  • *
  • * 该方法与{@link #put} 方法的实现基本一样
  • *
  • * @return the previous value associated with the specified key,
  • * or <tt>null</tt> if there was no mapping for the key
  • * @throws NullPointerException if the specified key or value is null
  • */
  • @SuppressWarnings("unchecked")
  • public V putIfAbsent(K key, V value) {
  • Segment<K, V> s;
  • if (value == null)
  • throw new NullPointerException();
  • int hash = hash(key);
  • int j = (hash >>> segmentShift) & segmentMask;
  • if ((s = (Segment<K, V>) UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null)
  • s = ensureSegment(j);
• // 与put方法不同的地方在于,此处onlyIfAbsent传入的是true,即如果键已存在就不覆盖
  • return s.put(key, hash, value, true);
  • }
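put与putIfAbsent的行为差异可以用一个简单的用法示例验证:put总是覆盖并返回旧值,putIfAbsent仅在键不存在时写入:

```java
import java.util.concurrent.ConcurrentHashMap;

// put与putIfAbsent的行为对比
class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        // 键不存在时两者行为相同,均写入并返回null
        if (map.put("a", 1) != null) throw new AssertionError();
        // put覆盖旧值,返回旧值1
        if (map.put("a", 2) != 1) throw new AssertionError();
        // putIfAbsent不覆盖,返回当前已存在的值2
        if (map.putIfAbsent("a", 3) != 2) throw new AssertionError();
        if (map.get("a") != 2) throw new AssertionError();
        System.out.println("ok");
    }
}
```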
  • /**
  • * Copies all of the mappings from the specified map to this one.
  • * These mappings replace any mappings that this map had for any of the
  • * keys currently in the specified map.
  • *
  • * 将m中的键值对拷贝到ConcurrentHashMap中,会覆盖ConcurrentHashMap中原有的键值对
  • *
  • * @param m mappings to be stored in this map
  • */
  • public void putAll(Map<? extends K, ? extends V> m) {
  • // 循环遍历m中所有的键值对,使用put方法依次添加到ConcurrentHashMap中
  • for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
  • put(e.getKey(), e.getValue());
  • }
  • /**
  • * Removes the key (and its corresponding value) from this map.
  • * This method does nothing if the key is not in the map.
  • *
  • * @param key the key that needs to be removed
  • * @return the previous value associated with <tt>key</tt>, or
  • * <tt>null</tt> if there was no mapping for <tt>key</tt>
  • * @throws NullPointerException if the specified key is null
  • */
  • public V remove(Object key) {
  • // 根据哈希值获取Segment
  • int hash = hash(key);
  • Segment<K, V> s = segmentForHash(hash);
  • // 调用Segment的删除方法
  • return s == null ? null : s.remove(key, hash, null);
  • }
  • /**
  • * {@inheritDoc}
  • *
  • * @throws NullPointerException if the specified key is null
  • */
  • public boolean remove(Object key, Object value) {
  • int hash = hash(key);
  • Segment<K, V> s;
  • // 调用Segment的删除方法
  • return value != null && (s = segmentForHash(hash)) != null &&
  • s.remove(key, hash, value) != null;
  • }
  • /**
  • * {@inheritDoc}
  • *
  • * @throws NullPointerException if any of the arguments are null
  • */
  • public boolean replace(K key, V oldValue, V newValue) {
  • int hash = hash(key);
  • // 检查传入参数是否合法
  • if (oldValue == null || newValue == null)
  • throw new NullPointerException();
  • // 根据哈希值获取对应的Segment
  • Segment<K, V> s = segmentForHash(hash);
  • // 调用Segment的replace方法
  • return s != null && s.replace(key, hash, oldValue, newValue);
  • }
  • /**
  • * {@inheritDoc}
  • *
  • * @return the previous value associated with the specified key,
  • * or <tt>null</tt> if there was no mapping for the key
  • * @throws NullPointerException if the specified key or value is null
  • */
  • public V replace(K key, V value) {
  • int hash = hash(key);
  • // 检查传入参数是否合法
  • if (value == null)
  • throw new NullPointerException();
  • // 根据哈希值获取对应的Segment
  • Segment<K, V> s = segmentForHash(hash);
  • // 调用Segment的replace方法
  • return s == null ? null : s.replace(key, hash, value);
  • }
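两个公开replace重载的语义差异也可以用用法示例验证:三参数版本仅在当前值等于oldValue时才替换并返回boolean,两参数版本只要键存在就替换并返回旧值;键不存在时两者都不会写入:

```java
import java.util.concurrent.ConcurrentHashMap;

// replace(K, V, V) 与 replace(K, V) 的行为对比
class ReplaceDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        // 键不存在:不写入,返回null
        if (map.replace("b", 9) != null || map.containsKey("b")) throw new AssertionError();
        // 旧值不匹配:替换失败
        if (map.replace("a", 5, 2)) throw new AssertionError();
        // 旧值匹配:替换成功
        if (!map.replace("a", 1, 2)) throw new AssertionError();
        // 两参数版本:直接替换并返回旧值2
        if (map.replace("a", 3) != 2) throw new AssertionError();
        if (map.get("a") != 3) throw new AssertionError();
        System.out.println("ok");
    }
}
```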
  • /**
  • * Removes all of the mappings from this map.
  • * 清空Map中所有的键值对
  • */
  • public void clear() {
  • final Segment<K, V>[] segments = this.segments;
  • // 遍历Segment进行清空处理
  • for (int j = 0; j < segments.length; ++j) {
  • Segment<K, V> s = segmentAt(segments, j);
  • if (s != null)
  • s.clear();
  • }
  • }
  • /**
  • * Returns a {@link Set} view of the keys contained in this map.
  • * The set is backed by the map, so changes to the map are
  • * reflected in the set, and vice-versa. The set supports element
  • * removal, which removes the corresponding mapping from this map,
  • * via the <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
  • * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
  • * operations. It does not support the <tt>add</tt> or
  • * <tt>addAll</tt> operations.
  • *
  • * <p>The view's <tt>iterator</tt> is a "weakly consistent" iterator
  • * that will never throw {@link ConcurrentModificationException},
  • * and guarantees to traverse elements as they existed upon
  • * construction of the iterator, and may (but is not guaranteed to)
  • * reflect any modifications subsequent to construction.
  • */
  • public Set<K> keySet() {
  • Set<K> ks = keySet;
  • return (ks != null) ? ks : (keySet = new KeySet());
  • }
  • /**
  • * Returns a {@link Collection} view of the values contained in this map.
  • * The collection is backed by the map, so changes to the map are
  • * reflected in the collection, and vice-versa. The collection
  • * supports element removal, which removes the corresponding
  • * mapping from this map, via the <tt>Iterator.remove</tt>,
  • * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
  • * <tt>retainAll</tt>, and <tt>clear</tt> operations. It does not
  • * support the <tt>add</tt> or <tt>addAll</tt> operations.
  • *
  • * <p>The view's <tt>iterator</tt> is a "weakly consistent" iterator
  • * that will never throw {@link ConcurrentModificationException},
  • * and guarantees to traverse elements as they existed upon
  • * construction of the iterator, and may (but is not guaranteed to)
  • * reflect any modifications subsequent to construction.
  • */
  • public Collection<V> values() {
  • Collection<V> vs = values;
  • return (vs != null) ? vs : (values = new Values());
  • }
  • /**
  • * Returns a {@link Set} view of the mappings contained in this map.
  • * The set is backed by the map, so changes to the map are
  • * reflected in the set, and vice-versa. The set supports element
  • * removal, which removes the corresponding mapping from the map,
  • * via the <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
  • * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
  • * operations. It does not support the <tt>add</tt> or
  • * <tt>addAll</tt> operations.
  • *
  • * <p>The view's <tt>iterator</tt> is a "weakly consistent" iterator
  • * that will never throw {@link ConcurrentModificationException},
  • * and guarantees to traverse elements as they existed upon
  • * construction of the iterator, and may (but is not guaranteed to)
  • * reflect any modifications subsequent to construction.
  • */
  • public Set<Map.Entry<K, V>> entrySet() {
  • Set<Map.Entry<K, V>> es = entrySet;
  • return (es != null) ? es : (entrySet = new EntrySet());
  • }
  • /**
  • * Returns an enumeration of the keys in this table.
  • *
  • * @return an enumeration of the keys in this table
  • * @see #keySet()
  • */
  • public Enumeration<K> keys() {
  • return new KeyIterator();
  • }
  • /**
  • * Returns an enumeration of the values in this table.
  • *
  • * @return an enumeration of the values in this table
  • * @see #values()
  • */
  • public Enumeration<V> elements() {
  • return new ValueIterator();
  • }
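上面keySet()/values()/entrySet()的文档都强调其迭代器是"弱一致性"的:遍历期间并发修改Map不会抛出ConcurrentModificationException(对比HashMap的fail-fast迭代器),且保证能遍历到迭代器创建时已存在的元素,之后的修改可能反映也可能不反映。下面的示例演示这一点(遍历中只修改一次,以保证循环必然终止):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

// 弱一致性迭代器演示:遍历期间修改Map不会抛异常
class WeaklyConsistentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 8; i++) map.put(i, "v" + i);
        Iterator<Integer> it = map.keySet().iterator();
        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
            if (seen == 1)
                map.put(100, "new"); // 遍历中修改一次,不会抛ConcurrentModificationException
        }
        // 迭代器创建时已存在的8个元素必然被遍历到;新增的键可能被看到,也可能看不到
        if (seen < 8) throw new AssertionError();
        System.out.println("seen=" + seen);
    }
}
```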
  • /* ---------------- Iterator Support -------------- */
  • abstract class HashIterator {
  • // ConcurrentHashMap里下一个Segment的索引
  • int nextSegmentIndex;
  • // 下一个HashEntry链的索引
  • int nextTableIndex;
  • // 当前Segment的table数组
  • HashEntry<K, V>[] currentTable;
  • // 下一个HashEntry
  • HashEntry<K, V> nextEntry;
  • // 最后返回的HashEntry,remove方法通过它定位要删除的元素
  • HashEntry<K, V> lastReturned;
  • HashIterator() {
  • // 初始化nextSegmentIndex为Segment数组最大索引
  • nextSegmentIndex = segments.length - 1;
  • nextTableIndex = -1;
  • advance();
  • }
  • /**
  • * Set nextEntry to first node of next non-empty table
  • * (in backwards order, to simplify checks).
  • * 设置nextEntry为下一个不为空的HashEntry链的头节点
  • */
  • final void advance() {
  • for (; ; ) {
  • if (nextTableIndex >= 0) {
  • // 获取table中的HashEntry,即按table数组从后向前的顺序依次获取
  • if ((nextEntry = entryAt(currentTable, nextTableIndex--)) != null)
  • break;
  • } else if (nextSegmentIndex >= 0) {
  • // 获取Segment,即在Segment数组中从后向前的顺序依次获取
  • Segment<K, V> seg = segmentAt(segments, nextSegmentIndex--);
  • // 获取Segment相应的table数组
  • if (seg != null && (currentTable = seg.table) != null)
  • // 将nextTableIndex置为table数组最大索引
  • nextTableIndex = currentTable.length - 1;
  • } else
  • break;
  • }
  • }
  • final HashEntry<K, V> nextEntry() {
  • // 取nextEntry作为要返回的HashEntry
  • HashEntry<K, V> e = nextEntry;
  • if (e == null)
  • throw new NoSuchElementException();
  • lastReturned = e; // cannot assign until after null check
  • if ((nextEntry = e.next) == null)
  • advance();
  • return e;
  • }
  • // 判断是否有下一个元素
  • public final boolean hasNext() {
  • return nextEntry != null;
  • }
  • // 判断是否还有更多的元素(Enumeration接口的对应方法,逻辑与hasNext相同)
  • public final boolean hasMoreElements() {
  • return nextEntry != null;
  • }
  • public final void remove() {
  • if (lastReturned == null)
  • throw new IllegalStateException();
  • // 从ConcurrentHashMap中移除当前遍历到的元素
  • ConcurrentHashMap.this.remove(lastReturned.key);
  • lastReturned = null;
  • }
  • }
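
上面的HashIterator是各个迭代器的基类,它实现的是"弱一致性"迭代:迭代过程中对map的并发修改不会抛出ConcurrentModificationException,迭代器保证遍历到创建时已存在的元素,对之后的修改则可能反映也可能不反映。下面用一个简短示例演示这一点(类名WeaklyConsistentDemo为示例自拟):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        // 迭代过程中直接修改map:对HashMap这样做会触发fail-fast,
        // 而ConcurrentHashMap的弱一致性迭代器不会抛异常
        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            map.put("d", 4);
        }
        System.out.println(map.size()); // 4
    }
}
```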
  • // 键迭代器
  • final class KeyIterator
  • extends HashIterator
  • implements Iterator<K>, Enumeration<K> {
  • public final K next() {
  • return super.nextEntry().key;
  • }
  • public final K nextElement() {
  • return super.nextEntry().key;
  • }
  • }
  • // 值迭代器
  • final class ValueIterator
  • extends HashIterator
  • implements Iterator<V>, Enumeration<V> {
  • public final V next() {
  • return super.nextEntry().value;
  • }
  • public final V nextElement() {
  • return super.nextEntry().value;
  • }
  • }
  • /**
  • * Custom Entry class used by EntryIterator.next(), that relays
  • * setValue changes to the underlying map.
  • */
  • final class WriteThroughEntry
  • extends AbstractMap.SimpleEntry<K, V> {
  • WriteThroughEntry(K k, V v) {
  • super(k, v);
  • }
  • /**
  • * Set our entry's value and write through to the map. The
  • * value to return is somewhat arbitrary here. Since a
  • * WriteThroughEntry does not necessarily track asynchronous
  • * changes, the most recent "previous" value could be
  • * different from what we return (or could even have been
  • * removed in which case the put will re-establish). We do not
  • * and cannot guarantee more.
  • */
  • public V setValue(V value) {
  • if (value == null) throw new NullPointerException();
  • V v = super.setValue(value);
  • ConcurrentHashMap.this.put(getKey(), value);
  • return v;
  • }
  • }
  • final class EntryIterator
  • extends HashIterator
  • implements Iterator<Entry<K, V>> {
  • public Map.Entry<K, V> next() {
  • HashEntry<K, V> e = super.nextEntry();
  • return new WriteThroughEntry(e.key, e.value);
  • }
  • }
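
EntryIterator返回的是WriteThroughEntry,它的setValue除了修改Entry自身,还会把新值put回底层map。下面用一个简短示例演示这种"写穿透"行为(类名WriteThroughDemo为示例自拟):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WriteThroughDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("count", 1);

        // 通过entrySet迭代得到的Entry是WriteThroughEntry,
        // setValue会把修改写回底层map
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            e.setValue(e.getValue() + 1);
        }
        System.out.println(map.get("count")); // 2
    }
}
```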
  • // 键集类
  • final class KeySet extends AbstractSet<K> {
  • public Iterator<K> iterator() {
  • return new KeyIterator();
  • }
  • public int size() {
  • return ConcurrentHashMap.this.size();
  • }
  • public boolean isEmpty() {
  • return ConcurrentHashMap.this.isEmpty();
  • }
  • public boolean contains(Object o) {
  • return ConcurrentHashMap.this.containsKey(o);
  • }
  • public boolean remove(Object o) {
  • return ConcurrentHashMap.this.remove(o) != null;
  • }
  • public void clear() {
  • ConcurrentHashMap.this.clear();
  • }
  • }
  • // 值集类
  • final class Values extends AbstractCollection<V> {
  • public Iterator<V> iterator() {
  • return new ValueIterator();
  • }
  • public int size() {
  • return ConcurrentHashMap.this.size();
  • }
  • public boolean isEmpty() {
  • return ConcurrentHashMap.this.isEmpty();
  • }
  • public boolean contains(Object o) {
  • return ConcurrentHashMap.this.containsValue(o);
  • }
  • public void clear() {
  • ConcurrentHashMap.this.clear();
  • }
  • }
  • final class EntrySet extends AbstractSet<Map.Entry<K, V>> {
  • public Iterator<Map.Entry<K, V>> iterator() {
  • return new EntryIterator();
  • }
  • public boolean contains(Object o) {
  • if (!(o instanceof Map.Entry))
  • return false;
  • Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
  • V v = ConcurrentHashMap.this.get(e.getKey());
  • return v != null && v.equals(e.getValue());
  • }
  • public boolean remove(Object o) {
  • if (!(o instanceof Map.Entry))
  • return false;
  • Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
  • return ConcurrentHashMap.this.remove(e.getKey(), e.getValue());
  • }
  • public int size() {
  • return ConcurrentHashMap.this.size();
  • }
  • public boolean isEmpty() {
  • return ConcurrentHashMap.this.isEmpty();
  • }
  • public void clear() {
  • ConcurrentHashMap.this.clear();
  • }
  • }
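
KeySet、Values、EntrySet这三个视图类都直接委托给外部的ConcurrentHashMap:视图上的remove、clear会作用到map本身,反之map的变化也会反映到视图中;但视图不支持add/addAll(继承自AbstractCollection的add会抛出UnsupportedOperationException)。下面用一个简短示例演示(类名ViewDemo为示例自拟):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ViewDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);

        // keySet是map的视图:从视图中删除元素,map中对应的映射也被删除
        Set<String> keys = map.keySet();
        keys.remove("a");
        System.out.println(map.containsKey("a")); // false

        // 视图不支持add操作
        try {
            keys.add("c");
        } catch (UnsupportedOperationException ex) {
            System.out.println("add not supported");
        }
    }
}
```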
  • /* ---------------- Serialization Support -------------- */
  • /**
  • * Save the state of the <tt>ConcurrentHashMap</tt> instance to a
  • * stream (i.e., serialize it).
  • *
  • * 序列化写方法
  • *
  • * @param s the stream
  • * @serialData the key (Object) and value (Object)
  • * for each key-value mapping, followed by a null pair.
  • * The key-value mappings are emitted in no particular order.
  • */
  • private void writeObject(java.io.ObjectOutputStream s) throws IOException {
  • // force all segments for serialization compatibility
  • for (int k = 0; k < segments.length; ++k)
  • ensureSegment(k);
  • // 调用缺省序列化写
  • s.defaultWriteObject();
  • // 遍历Segment数组
  • final Segment<K, V>[] segments = this.segments;
  • for (int k = 0; k < segments.length; ++k) {
  • Segment<K, V> seg = segmentAt(segments, k);
  • // 上锁
  • seg.lock();
  • try {
  • // 遍历Segment的table
  • HashEntry<K, V>[] tab = seg.table;
  • for (int i = 0; i < tab.length; ++i) {
  • HashEntry<K, V> e;
  • // 分别写出HashEntry的键和值
  • for (e = entryAt(tab, i); e != null; e = e.next) {
  • s.writeObject(e.key);
  • s.writeObject(e.value);
  • }
  • }
  • } finally {
  • // 解锁
  • seg.unlock();
  • }
  • }
  • // 写出两个null作为结束标记
  • s.writeObject(null);
  • s.writeObject(null);
  • }
  • /**
  • * Reconstitute the <tt>ConcurrentHashMap</tt> instance from a
  • * stream (i.e., deserialize it).
  • *
  • * 序列化读方法
  • *
  • * @param s the stream
  • */
  • @SuppressWarnings("unchecked")
  • private void readObject(java.io.ObjectInputStream s)
  • throws IOException, ClassNotFoundException {
  • s.defaultReadObject();
  • // 设置hashSeed
  • UNSAFE.putIntVolatile(this, HASHSEED_OFFSET, randomHashSeed(this));
  • // Re-initialize segments to be minimally sized, and let grow.
  • int cap = MIN_SEGMENT_TABLE_CAPACITY;
  • final Segment<K, V>[] segments = this.segments;
  • for (int k = 0; k < segments.length; ++k) {
  • Segment<K, V> seg = segments[k];
  • if (seg != null) {
  • seg.threshold = (int) (cap * seg.loadFactor);
  • seg.table = (HashEntry<K, V>[]) new HashEntry[cap];
  • }
  • }
  • // Read the keys and values, and put the mappings in the table
  • for (; ; ) {
  • K key = (K) s.readObject();
  • V value = (V) s.readObject();
  • if (key == null)
  • break;
  • put(key, value);
  • }
  • }
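
从上面可以看出,writeObject逐段加锁后依次写出所有键值对,最后写出两个null作为结束标记;readObject则循环读取键值对并重新put,读到null键时结束。下面用一个序列化往返的简短示例验证(类名SerializeDemo为示例自拟):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.concurrent.ConcurrentHashMap;

public class SerializeDemo {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);

        // 序列化:writeObject依次写出所有键值对
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(map);

        // 反序列化:readObject循环读取键值对并重新put
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        @SuppressWarnings("unchecked")
        ConcurrentHashMap<String, Integer> copy =
                (ConcurrentHashMap<String, Integer>) in.readObject();
        System.out.println(copy.get("a") + ", " + copy.get("b")); // 1, 2
    }
}
```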
  • // Unsafe mechanics
  • private static final sun.misc.Unsafe UNSAFE;
  • // arrayBaseOffset是一个本地方法,可以获取数组第一个元素的偏移地址
  • // SBASE是Segment数组第一个元素的偏移地址
  • private static final long SBASE;
  • // arrayIndexScale是一个本地方法,可以获取数组中每个元素的增量地址(即每个元素占用的字节数)
  • // SSHIFT即Segment数组元素增量地址以2为底的对数,用于以移位代替乘法计算元素偏移
  • private static final int SSHIFT;
  • // TBASE是HashEntry数组第一个元素的偏移地址
  • private static final long TBASE;
  • // TSHIFT即HashEntry数组元素增量地址以2为底的对数
  • private static final int TSHIFT;
  • // hashSeed的偏移量
  • private static final long HASHSEED_OFFSET;
  • static {
  • int ss, ts;
  • try {
  • UNSAFE = sun.misc.Unsafe.getUnsafe();
  • Class tc = HashEntry[].class;
  • Class sc = Segment[].class;
  • TBASE = UNSAFE.arrayBaseOffset(tc);
  • SBASE = UNSAFE.arrayBaseOffset(sc);
  • ts = UNSAFE.arrayIndexScale(tc);
  • ss = UNSAFE.arrayIndexScale(sc);
  • HASHSEED_OFFSET = UNSAFE.objectFieldOffset(ConcurrentHashMap.class.getDeclaredField("hashSeed"));
  • } catch (Exception e) {
  • throw new Error(e);
  • }
  • // ss和ts必须是2的幂,否则无法用移位计算偏移量
  • if ((ss & (ss - 1)) != 0 || (ts & (ts - 1)) != 0)
  • throw new Error("data type scale not a power of two");
  • // SSHIFT即log2(ss):由于ss是2的幂,31减去其二进制前导零的个数即为其以2为底的对数
  • SSHIFT = 31 - Integer.numberOfLeadingZeros(ss);
  • // TSHIFT即log2(ts),含义同上
  • TSHIFT = 31 - Integer.numberOfLeadingZeros(ts);
  • }
  • }
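
static块中SSHIFT/TSHIFT的计算可以用一个小例子说明:当x是2的幂时,31 - Integer.numberOfLeadingZeros(x)就是log2(x),于是k * scale可以用移位k << shift代替,这正是segmentAt、entryAt等方法计算数组元素偏移的方式。下面的示例中scale取4仅作演示,实际的增量地址取决于JVM(类名ShiftDemo为示例自拟):

```java
public class ShiftDemo {
    public static void main(String[] args) {
        // 假设数组元素增量地址为4字节(实际值取决于JVM,这里仅作演示)
        int scale = 4;
        // 当x为2的幂时,31 - numberOfLeadingZeros(x) 即为 log2(x)
        int shift = 31 - Integer.numberOfLeadingZeros(scale);
        System.out.println(shift); // 2

        // 于是 k * scale 可以用移位 k << shift 代替
        int k = 5;
        System.out.println((k << shift) == k * scale); // true
    }
}
```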