An In-Depth Look at ConcurrentHashMap (Based on JDK 1.7)

   1 package cn.com.pep.concurrent;
   2
   3 import java.util.concurrent.ConcurrentMap;
   4 import java.util.concurrent.locks.*;
   5
   6 import java.util.*;
   7 import java.io.Serializable;
   8 import java.io.IOException;
   9
  10 /**
  11  * A hash table supporting full concurrency of retrievals and adjustable
  12  * expected concurrency for updates. This class obeys the same functional
  13  * specification as {@link java.util.Hashtable}, and includes versions of
  14  * methods corresponding to each method of Hashtable. However, even
  15  * though all operations are thread-safe, retrieval operations do not
  16  * entail locking, and there is not any support for locking the entire
  17  * table in a way that prevents all access. This class is fully interoperable
  18  * with Hashtable in programs that rely on its thread safety but not on
  19  * its synchronization details.
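As a quick illustration of the interoperability described above, the sketch below uses the JDK's own java.util.concurrent.ConcurrentHashMap where a synchronized Hashtable might otherwise be used. The class name DropInDemo and the word-counting scenario are illustrative only, not from the original post:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch: per-operation thread safety without external locking.
// putIfAbsent and replace(k, expected, update) give atomic check-then-act,
// so no client-side synchronized block is needed.
class DropInDemo {
    static int hitCount(Iterable<String> words) {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<String, Integer>();
        for (String w : words) {
            // Atomic "insert if missing": no lost update even under contention.
            Integer prev = counts.putIfAbsent(w, 1);
            if (prev != null) {
                // Classic CAS-style retry loop using the atomic replace.
                while (!counts.replace(w, prev, prev + 1)) {
                    prev = counts.get(w);
                }
            }
        }
        return counts.get("a");
    }
}
```

Unlike a Hashtable, none of these calls lock the whole table, so readers in other threads proceed concurrently.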

  20  *
  21  * 

  22  * Retrieval operations (including get) generally do not block, so may
  23  * overlap with update operations (including put and remove).
  24  * Retrievals reflect the results of the most recently completed update
  25  * operations holding upon their onset. For aggregate operations such as
  26  * putAll and clear, concurrent retrievals may reflect
  27  * insertion or removal of only some entries. Similarly, Iterators and
  28  * Enumerations return elements reflecting the state of the hash table at some
  29  * point at or since the creation of the iterator/enumeration. They do
  30  * not throw {@link ConcurrentModificationException}. However,
  31  * iterators are designed to be used by only one thread at a time.
  32  *
  33  *
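The weak-consistency guarantee for iterators described above can be demonstrated directly (a minimal sketch with a hypothetical class name; the behavior follows the JDK's documented guarantee that ConcurrentHashMap iterators never throw ConcurrentModificationException):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: modify the map mid-iteration. A HashMap iterator would fail fast
// with ConcurrentModificationException; a ConcurrentHashMap iterator is
// weakly consistent and simply may or may not see the new entry.
class WeakIterDemo {
    static boolean iterateWhileModifying() {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<Integer, String>();
        for (int i = 0; i < 8; i++) map.put(i, "v" + i);
        Iterator<Integer> it = map.keySet().iterator();
        it.next();
        map.put(100, "added-mid-iteration"); // would break a fail-fast iterator
        while (it.hasNext()) it.next();      // no ConcurrentModificationException
        return true;
    }
}
```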

  34  * The allowed concurrency among update operations is guided by the optional
  35  * concurrencyLevel constructor argument (default 16), which
  36  * is used as a hint for internal sizing. The table is internally partitioned to
  37  * try to permit the indicated number of concurrent updates without contention.
  38  * Because placement in hash tables is essentially random, the actual
  39  * concurrency will vary. Ideally, you should choose a value to accommodate as
  40  * many threads as will ever concurrently modify the table. Using a
  41  * significantly higher value than you need can waste space and time, and a
  42  * significantly lower value can lead to thread contention. But overestimates
  43  * and underestimates within an order of magnitude do not usually have much
  44  * noticeable impact. A value of one is appropriate when it is known that only
  45  * one thread will modify and all others will only read. Also, resizing this or
  46  * any other kind of hash table is a relatively slow operation, so, when
  47  * possible, it is a good idea to provide estimates of expected table sizes in
  48  * constructors.
  49  *
  50  *
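How concurrencyLevel turns into an internal partition count can be sketched by extracting the power-of-two rounding loop that appears later in the listing's constructor (the helper class name SegmentSizing is illustrative):

```java
// Sketch of the constructor's sizing logic: round concurrencyLevel up to the
// next power of two (ssize = number of segments), and derive the shift/mask
// used to pick a segment from the high bits of a key's hash.
class SegmentSizing {
    // Returns {ssize, segmentShift, segmentMask}.
    static int[] sizing(int concurrencyLevel) {
        int sshift = 0;
        int ssize = 1;
        // Double ssize until it reaches the hint, counting the shift on the way.
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        return new int[] { ssize, 32 - sshift, ssize - 1 };
    }
}
```

For the default concurrencyLevel of 16 this yields 16 segments, segmentShift = 28 and segmentMask = 15, so the top 4 bits of the spread hash select the segment.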

  51  * This class and its views and iterators implement all of the optional
  52  * methods of the {@link Map} and {@link Iterator} interfaces.
  53  *
  54  *

  55  * Like {@link Hashtable} but unlike {@link HashMap}, this class does
  56  * not allow null to be used as a key or value.
  57  *
  58  *
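The null-hostility noted above exists because, in a concurrent map, get(k) == null must unambiguously mean "no mapping"; there is no way to lock the table and ask containsKey afterwards. A minimal sketch (hypothetical class name, behavior per the JDK specification):

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: both null keys and null values are rejected with
// NullPointerException, unlike HashMap which tolerates both.
class NullDemo {
    static boolean rejectsNulls() {
        ConcurrentHashMap<String, String> m = new ConcurrentHashMap<String, String>();
        boolean keyRejected = false, valueRejected = false;
        try { m.put(null, "v"); } catch (NullPointerException e) { keyRejected = true; }
        try { m.put("k", null); } catch (NullPointerException e) { valueRejected = true; }
        return keyRejected && valueRejected;
    }
}
```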

  59  * This class is a member of the
  60  * Java
  61  * Collections Framework.
  62  *
  63  * @since 1.5
  64  * @author Doug Lea
  65  * @param <K> the type of keys maintained by this map
  66  * @param <V> the type of mapped values
  67  */
  68 public class ConcurrentHashMap<K, V> extends AbstractMap<K, V> implements ConcurrentMap<K, V>, Serializable {
  69     private static final long serialVersionUID = 7249069246763182397L;
  70
  71     /*
  72      * The basic strategy is to subdivide the table among Segments, each of which
  73      * itself is a concurrently readable hash table. To reduce footprint, all but
  74      * one segments are constructed only when first needed (see ensureSegment). To
  75      * maintain visibility in the presence of lazy construction, accesses to
  76      * segments as well as elements of segment's table must use volatile access,
  77      * which is done via Unsafe within methods segmentAt etc below. These provide
  78      * the functionality of AtomicReferenceArrays but reduce the levels of
  79      * indirection. Additionally, volatile-writes of table elements and entry "next"
  80      * fields within locked operations use the cheaper "lazySet" forms of writes
  81      * (via putOrderedObject) because these writes are always followed by lock
  82      * releases that maintain sequential consistency of table updates.
  83      *
  84      * Historical note: The previous version of this class relied heavily on "final"
  85      * fields, which avoided some volatile reads at the expense of a large initial
  86      * footprint. Some remnants of that design (including forced construction of
  87      * segment 0) exist to ensure serialization compatibility.
  88      */
  89
  90     /* ---------------- Constants -------------- */
  91
  92     /**
  93      * The default initial capacity for this table, used when not otherwise
  94      * specified in a constructor.
  95      */
  96     static final int DEFAULT_INITIAL_CAPACITY = 16;
  97
  98     /**
  99      * The default load factor for this table, used when not otherwise specified in
 100      * a constructor.
 101      */
 102     static final float DEFAULT_LOAD_FACTOR = 0.75f;
 103
 104     /**
 105      * The default concurrency level for this table, used when not otherwise
 106      * specified in a constructor.
 107      */
 108     static final int DEFAULT_CONCURRENCY_LEVEL = 16;
 109
 110     /**
 111      * The maximum capacity, used if a higher value is implicitly specified by
 112      * either of the constructors with arguments. MUST be a power of two <= 1<<30
 113      * to ensure that entries are indexable using ints.
 114      */
 115     static final int MAXIMUM_CAPACITY = 1 << 30;
 116
 117     /**
 118      * The minimum capacity for per-segment tables. Must be a power of two, at least
 119      * two to avoid immediate resizing on next use after lazy construction.
 120      */
 121     static final int MIN_SEGMENT_TABLE_CAPACITY = 2;
 122
 123     /**
 124      * The maximum number of segments to allow; used to bound constructor arguments.
 125      * Must be power of two less than 1 << 24.
 126      */
 127     static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
 128
 129     /**
 130      * Number of unsynchronized retries in size and containsValue methods before
 131      * resorting to locking. This is used to avoid unbounded retries if tables
 132      * undergo continuous modification which would make it impossible to obtain an
 133      * accurate result.
 134      */
 135     static final int RETRIES_BEFORE_LOCK = 2;
 136
 137     /* ---------------- Fields -------------- */
 138
 139     /**
 140      * holds values which can't be initialized until after VM is booted.
 141      */
 142     private static class Holder {
 143
 144         /**
 145          * Enable alternative hashing of String keys?
 146          *
 147          *

 148          * Unlike the other hash map implementations we do not implement a threshold for
 149          * regulating whether alternative hashing is used for String keys. Alternative
 150          * hashing is either enabled for all instances or disabled for all instances.
 151          */
 152         static final boolean ALTERNATIVE_HASHING;
 153
 154         static {
 155             // Use the "threshold" system property even though our threshold
 156             // behaviour is "ON" or "OFF".
 157             String altThreshold = java.security.AccessController
 158                     .doPrivileged(new sun.security.action.GetPropertyAction("jdk.map.althashing.threshold"));
 159
 160             int threshold;
 161             try {
 162                 threshold = (null != altThreshold) ? Integer.parseInt(altThreshold) : Integer.MAX_VALUE;
 163
 164                 // disable alternative hashing if -1
 165                 if (threshold == -1) {
 166                     threshold = Integer.MAX_VALUE;
 167                 }
 168
 169                 if (threshold < 0) {
 170                     throw new IllegalArgumentException("value must be positive integer.");
 171                 }
 172             } catch (IllegalArgumentException failed) {
 173                 throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
 174             }
 175             ALTERNATIVE_HASHING = threshold <= MAXIMUM_CAPACITY;
 176         }
 177     }
 178
 179     /**
 180      * A randomizing value associated with this instance that is applied to hash
 181      * code of keys to make hash collisions harder to find.
 182      */
 183     private transient final int hashSeed = randomHashSeed(this);
 184
 185     private static int randomHashSeed(ConcurrentHashMap instance) {
 186         if (sun.misc.VM.isBooted() && Holder.ALTERNATIVE_HASHING) {
 187             return sun.misc.Hashing.randomHashSeed(instance);
 188         }
 189
 190         return 0;
 191     }
 192
 193     /**
 194      * Mask value for indexing into segments. The upper bits of a key's hash code
 195      * are used to choose the segment.
 196      */
 197     final int segmentMask;
 198
 199     /**
 200      * Shift value for indexing within segments.
 201      */
 202     final int segmentShift;
 203
 204     /**
 205      * The segments, each of which is a specialized hash table.
 206      */
 207     final Segment<K, V>[] segments;
 208
 209     transient Set<K> keySet;
 210     transient Set<Map.Entry<K, V>> entrySet;
 211     transient Collection<V> values;
 212
 213     /**
 214      * ConcurrentHashMap list entry. Note that this is never exported out as a
 215      * user-visible Map.Entry.
 216      */
 217     static final class HashEntry<K, V> {
 218         final int hash;
 219         final K key;
 220         volatile V value;
 221         volatile HashEntry<K, V> next;
 222
 223         HashEntry(int hash, K key, V value, HashEntry<K, V> next) {
 224             this.hash = hash;
 225             this.key = key;
 226             this.value = value;
 227             this.next = next;
 228         }
 229
 230         /**
 231          * Sets next field with volatile write semantics. (See above about use of
 232          * putOrderedObject.)
 233          */
 234         final void setNext(HashEntry<K, V> n) {
 235             UNSAFE.putOrderedObject(this, nextOffset, n);
 236         }
 237
 238         // Unsafe mechanics
 239         static final sun.misc.Unsafe UNSAFE;
 240         static final long nextOffset;
 241         static {
 242             try {
 243                 UNSAFE = sun.misc.Unsafe.getUnsafe();
 244                 Class<?> k = HashEntry.class;
 245                 nextOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("next"));
 246             } catch (Exception e) {
 247                 throw new Error(e);
 248             }
 249         }
 250     }
 251
 252     /**
 253      * Gets the ith element of given table (if nonnull) with volatile read
 254      * semantics.
      * Note: This is manually integrated into a few performance-sensitive
 255      * methods to reduce call overhead.
 256      */
 257     @SuppressWarnings("unchecked")
 258     static final <K, V> HashEntry<K, V> entryAt(HashEntry<K, V>[] tab, int i) {
 259         /* Fetch the head node of the chain at the given index of the HashEntry array */
 260         return (HashEntry<K, V>) (tab == null ? null : UNSAFE.getObjectVolatile(tab, ((long) i << TSHIFT) + TBASE));
 261     }
 262
 263     /**
 264      * Sets the ith element of given table, with volatile write semantics. (See
 265      * above about use of putOrderedObject.)
 266      */
 267     static final <K, V> void setEntryAt(HashEntry<K, V>[] tab, int i, HashEntry<K, V> e) {
 268         UNSAFE.putOrderedObject(tab, ((long) i << TSHIFT) + TBASE, e);
 269     }
 270
 271     /**
 272      * Applies a supplemental hash function to a given hashCode, which defends
 273      * against poor quality hash functions. This is critical because
 274      * ConcurrentHashMap uses power-of-two length hash tables, that otherwise
 275      * encounter collisions for hashCodes that do not differ in lower or upper bits.
 276      */
 277     private int hash(Object k) {
 278         int h = hashSeed;
 279
 280         if ((0 != h) && (k instanceof String)) {
 281             return sun.misc.Hashing.stringHash32((String) k);
 282         }
 283
 284         h ^= k.hashCode();
 285
 286         // Spread bits to regularize both segment and index locations,
 287         // using variant of single-word Wang/Jenkins hash.
 288         h += (h << 15) ^ 0xffffcd7d;
 289         h ^= (h >>> 10);
 290         h += (h << 3);
 291         h ^= (h >>> 6);
 292         h += (h << 2) + (h << 14);
 293         return h ^ (h >>> 16);
 294     }
 295
 296     /**
 297      * Segments are specialized versions of hash tables. This subclasses from
 298      * ReentrantLock opportunistically, just to simplify some locking and avoid
 299      * separate construction.
 300      */
 301     static final class Segment<K, V> extends ReentrantLock implements Serializable {
 302         /*
 303          * Segments maintain a table of entry lists that are always kept in a consistent
 304          * state, so can be read (via volatile reads of segments and tables) without
 305          * locking.
          * This requires replicating nodes when necessary during table
 306          * resizing, so the old lists can be traversed by readers still using old
 307          * version of table.
 308          *
 309          * This class defines only mutative methods requiring locking. Except as noted,
 310          * the methods of this class perform the per-segment versions of
 311          * ConcurrentHashMap methods. (Other methods are integrated directly into
 312          * ConcurrentHashMap methods.) These mutative methods use a form of controlled
 313          * spinning on contention via methods scanAndLock and scanAndLockForPut. These
 314          * intersperse tryLocks with traversals to locate nodes. The main benefit is to
 315          * absorb cache misses (which are very common for hash tables) while obtaining
 316          * locks so that traversal is faster once acquired. We do not actually use the
 317          * found nodes since they must be re-acquired under lock anyway to ensure
 318          * sequential consistency of updates (and in any case may be undetectably
 319          * stale), but they will normally be much faster to re-locate. Also,
 320          * scanAndLockForPut speculatively creates a fresh node to use in put if no node
 321          * is found.
 322          */
 323
 324         private static final long serialVersionUID = 2249069246763182397L;
 325
 326         /**
 327          * The maximum number of times to tryLock in a prescan before possibly blocking
 328          * on acquire in preparation for a locked segment operation. On multiprocessors,
 329          * using a bounded number of retries maintains cache acquired while locating
 330          * nodes.
 331          */
 332         static final int MAX_SCAN_RETRIES = Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
 333
 334         /**
 335          * The per-segment table. Elements are accessed via entryAt/setEntryAt providing
 336          * volatile semantics.
 337          */
 338         transient volatile HashEntry<K, V>[] table;
 339
 340         /**
 341          * The number of elements. Accessed only either within locks or among other
 342          * volatile reads that maintain visibility.
 343          */
 344         transient int count; // number of entries currently in this segment
 345
 346         /**
 347          * The total number of mutative operations in this segment. Even though this may
 348          * overflow 32 bits, it provides sufficient accuracy for stability checks in
 349          * CHM isEmpty() and size() methods. Accessed only either within locks or among
 350          * other volatile reads that maintain visibility.
 351          */
 352         transient int modCount; // number of write operations performed
 353
 354         /**
 355          * The table is rehashed when its size exceeds this threshold. (The value of
 356          * this field is always (int)(capacity *
 357          * loadFactor).)
 358          */
 359         transient int threshold; // resize threshold
 360
 361         /**
 362          * The load factor for the hash table. Even though this value is same for all
 363          * segments, it is replicated to avoid needing links to outer object.
 364          *
 365          * @serial
 366          */
 367         final float loadFactor; // load factor
 368
 369         Segment(float lf, int threshold, HashEntry<K, V>[] tab) {
 370             this.loadFactor = lf;
 371             this.threshold = threshold;
 372             this.table = tab;
 373         }
 374
 375         final V put(K key, int hash, V value, boolean onlyIfAbsent) {
 376             /* Try to acquire the lock; on failure, spin via scanAndLockForPut */
 377             HashEntry<K, V> node = tryLock() ?
                    null : scanAndLockForPut(key, hash, value);
 378             V oldValue = null;
 379             try {
 380                 HashEntry<K, V>[] tab = table;
 381                 int index = (tab.length - 1) & hash;
 382                 /* Fetch the head node of the bucket */
 383                 HashEntry<K, V> first = entryAt(tab, index);
 384                 for (HashEntry<K, V> e = first;;) {
 385                     if (e != null) {
 386                         K k;
 387                         if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
 388                             oldValue = e.value;
 389                             if (!onlyIfAbsent) {
 390                                 e.value = value;
 391                                 ++modCount;
 392                             }
 393                             break;
 394                         }
 395                         e = e.next;
 396                     } else {
 397                         /* A node was already created while acquiring the lock: link it in front of the current head so it becomes the new head */
 398                         if (node != null) {
 399                             node.next = first;
 400                         } else {
 401                             /* Otherwise create a new head node now */
 402                             node = new HashEntry<K, V>(hash, key, value, first);
 403                         }
 404
 405                         int c = count + 1;
 406                         /* Check whether this segment's table needs to be rehashed */
 407                         if (c > threshold && tab.length < MAXIMUM_CAPACITY) {
 408                             rehash(node);
 409                         } else {
 410                             setEntryAt(tab, index, node);
 411                         }
 412                         ++modCount;
 413                         count = c;
 414                         oldValue = null;
 415                         break;
 416                     }
 417                 }
 418             } finally {
 419                 unlock();
 420             }
 421
 422             return oldValue;
 423         }
 424
 425         /**
 426          * Doubles size of table and repacks entries, also adding the given node to new
 427          * table
 428          */
 429         @SuppressWarnings("unchecked")
 430         private void rehash(HashEntry<K, V> node) {
 431             HashEntry<K, V>[] oldTable = table;
 432             int oldCapacity = oldTable.length;
 433             int newCapacity = oldCapacity << 1;
 434             threshold = (int) (newCapacity * loadFactor);
 435             HashEntry<K, V>[] newTable = (HashEntry<K, V>[]) new HashEntry[newCapacity];
 436             int sizeMask = newCapacity - 1;
 437             for (int i = 0; i < oldCapacity; i++) {
 438                 HashEntry<K, V> e = oldTable[i];
 439                 if (e != null) {
 440                     HashEntry<K, V> next = e.next;
 441                     int idx = e.hash & sizeMask;
 442                     /* The chain has exactly one node */
 443                     if (next == null) {
 444                         newTable[idx] = e;
 445                     } else {
 446                         HashEntry<K, V> lastRun = e;
 447                         int lastIdx = idx;
 448                         /* Find lastRun, the start of the longest trailing run of nodes that all map to the same new index; everything after it moves together */
 449                         for (HashEntry<K, V> last = next; last != null; last = last.next) {
 450                             int k = last.hash & sizeMask;
 451                             if (k != lastIdx) {
 452                                 lastIdx = k;
 453                                 lastRun = last;
 454                             }
 455                         }
 456
 457                         /* After this loop, lastIdx is the index of lastRun's chain in the new table */
 458
 459                         /*
 460                          * Key points:
 461                          * 1. The table always doubles, so its length stays a power of two; an entry's index within a segment's table is i = e.hash & (tab.length - 1).
 462                          * 2. If the table length before resizing is 2^k, then by the formula above an entry's old index m depends only on the low k bits of its hash.
 463                          * 3. After resizing, the length becomes 2^(k+1), and since hash is final, the new index n depends only on the low k+1 bits of the hash.
 464                          * 4. The low k bits and low k+1 bits differ only in bit k, which is either 0 or 1, so before and after resizing: n == m or n == m + 2^k.
 465                          * 5. By point 4, each node on an old chain lands on one of exactly two new chains: a. the old index m; b. the new index m + 2^k.
 466                          */
 467
 468                         /* By point 5, nodes from different old chains can never land on the same new chain, so the tail starting at lastRun can be moved as a whole (its target may even be the old index) */
 469                         newTable[lastIdx] = lastRun;
 470                         /* Re-insert the nodes before lastRun at the head of their new chains (head insertion) */
 471                         for (HashEntry<K, V> p = e; p != lastRun; p = p.next) {
 472                             V v = p.value;
 473                             int h = p.hash;
 474                             int k = h & sizeMask;
 475                             HashEntry<K, V> n = newTable[k];
 476                             newTable[k] = new HashEntry<K, V>(h, p.key, v, n);
 477                         }
 478                     }
 479                 }
 480             }
 481             /* Finally link the new node in as the head of its chain */
 482             int nodeIndex = node.hash & sizeMask;
 483             node.setNext(newTable[nodeIndex]);
 484             newTable[nodeIndex] = node;
 485             table = newTable;
 486         }
 487
 488         /**
 489          * Scans for a node containing given key while trying to acquire lock, creating
 490          * and returning one if not found. Upon return, guarantees that lock is held.
 491          * Unlike in most methods, calls to method equals are not screened: Since
 492          * traversal speed doesn't matter, we might as well help warm up the associated
 493          * code and accesses as well.
 494          *
 495          * @return a new node if key not found, else null
 496          */
 497         private HashEntry<K, V> scanAndLockForPut(K key, int hash, V value) {
 498             /* While spinning, try to locate the node for this key */
 499             HashEntry<K, V> first = entryForHash(this, hash);
 500             HashEntry<K, V> e = first;
 501             HashEntry<K, V> node = null;
 502             int retries = -1;
 503             while (!tryLock()) {
 504                 HashEntry<K, V> f;
 505                 if (retries < 0) {
 506                     if (e == null) {
 507                         /* The whole chain was traversed during the spin without finding the key, so speculatively create a new node */
 508                         if (node == null) {
 509                             node = new HashEntry<K, V>(hash, key, value, null);
 510                             retries = 0;
 511                         }
 512                     } else if (key.equals(e.key)) {
 513                         retries = 0;
 514                     } else {
 515                         e = e.next;
 516                     }
 517                 } else if (++retries > MAX_SCAN_RETRIES) {
 518                     /* The spin limit was reached: stop spinning and block on the lock */
 519                     lock();
 520                     break;
 521                 } else if (((retries & 1) == 0) && (f = entryForHash(this, hash)) != first) {
 522                     /* The head changed while spinning, i.e. a concurrent write happened; start over */
 523                     e = first = f;
 524                     retries = -1;
 525                 }
 526             }
 527
 528             /* node is null if the lock was acquired on the first try, or before the chain traversal finished */
 529             return node;
 530         }
 531
 532         /**
 533          * Scans for a node containing the given key while trying to acquire lock for a
 534          * remove or replace operation. Upon return, guarantees that lock is held. Note
 535          * that we must lock even if the key is not found, to ensure sequential
 536          * consistency of updates.
 537          */
 538         private void scanAndLock(Object key, int hash) {
 539             // similar to but simpler than scanAndLockForPut
 540             HashEntry<K, V> first = entryForHash(this, hash);
 541             HashEntry<K, V> e = first;
 542             int retries = -1;
 543             /* Traverse the chain while spinning, so its nodes are cache-warm once the lock is held */
 544             while (!tryLock()) {
 545                 HashEntry<K, V> f;
 546                 if (retries < 0) {
 547                     if (e == null || key.equals(e.key))
 548                         retries = 0;
 549                     else
 550                         e = e.next;
 551                 } else if (++retries > MAX_SCAN_RETRIES) {
 552                     lock();
 553                     break;
 554                 } else if (((retries & 1) == 0) && ((f = entryForHash(this, hash)) != first)) {
 555                     /* The head changed while trying to acquire the lock; start over */
 556                     e = first = f;
 557                     retries = -1;
 558                 }
 559             }
 560         }
 561
 562         /**
 563          * Remove; match on key only if value null, else match both.
 564          */
 565         final V remove(Object key, int hash, Object value) {
 566             V oldValue = null;
 567             if (!tryLock()) {
 568                 scanAndLock(key, hash);
 569             }
 570             try {
 571                 HashEntry<K, V>[] tab = table;
 572                 int index = (tab.length - 1) & hash;
 573                 /* current node */
 574                 HashEntry<K, V> e = entryAt(tab, index);
 575                 /* previous node before current node */
 576                 HashEntry<K, V> prev = null;
 577                 while (e != null) {
 578                     K k;
 579                     HashEntry<K, V> next = e.next;
 580                     if ((k = e.key) == key || ((e.hash == hash) && k.equals(key))) {
 581                         V v = e.value;
 582                         /* When value is null, match on key only; otherwise both key and value must match */
 583                         if (value == null || value == v || value.equals(v)) {
 584                             if (prev == null) {
 585                                 setEntryAt(tab, index, next);
 586                             } else {
 587                                 prev.setNext(next);
 588                             }
 589                             modCount++;
 590                             count--;
 591                             oldValue = v;
 592                         }
 593                         break;
 594                     }
 595                     prev = e;
 596                     e = next;
 597                 }
 598             } finally {
 599                 unlock();
 600             }
 601             return oldValue;
 602         }
 603
 604         final boolean replace(K key, int hash, V oldValue, V newValue) {
 605             boolean replaced = false;
 606             if (!tryLock()) {
 607                 /* Failed to acquire the lock immediately; spin a bounded number of times, then block */
 608                 scanAndLock(key, hash);
 609             }
 610
 611             try {
 612                 for (HashEntry<K, V> e = entryForHash(this, hash); e != null; e = e.next) {
 613                     K k;
 614                     if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
 615                         /* Replace only if the key matches and the current value equals oldValue */
 616                         if (oldValue.equals(e.value)) {
 617                             e.value = newValue;
 618                             ++modCount;
 619                             replaced = true;
 620                         }
 621                         break;
 622                     }
 623                 }
 624             } finally {
 625                 unlock();
 626             }
 627             return replaced;
 628         }
 629
 630         final V replace(K key, int hash, V value) {
 631             V oldValue = null;
 632             if (!tryLock()) {
 633                 scanAndLock(key, hash);
 634             }
 635
 636             try {
 637                 HashEntry<K, V> e;
 638                 for (e = entryForHash(this, hash); e != null; e = e.next) {
 639                     K k;
 640                     if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
 641                         oldValue = e.value;
 642                         e.value = value;
 643                         ++modCount;
 644                         break;
 645                     }
 646                 }
 647             } finally {
 648                 unlock();
 649             }
 650             return oldValue;
 651         }
 652
 653         final void clear() {
 654             lock();
 655             try {
 656                 HashEntry<K, V>[] tab =
                        table;
 657                 for (int i = 0; i < tab.length; i++)
 658                     setEntryAt(tab, i, null);
 659                 ++modCount;
 660                 count = 0;
 661             } finally {
 662                 unlock();
 663             }
 664         }
 665     }
 666
 667     // Accessing segments
 668
 669     /**
 670      * Gets the jth element of given segment array (if nonnull) with volatile
 671      * element access semantics via Unsafe. (The null check can trigger harmlessly
 672      * only during deserialization.) Note: because each element of segments array is
 673      * set only once (using fully ordered writes), some performance-sensitive
 674      * methods rely on this method only as a recheck upon null reads.
 675      */
 676     @SuppressWarnings("unchecked")
 677     static final <K, V> Segment<K, V> segmentAt(Segment<K, V>[] ss, int j) {
 678         long u = (j << SSHIFT) + SBASE;
 679         return ss == null ? null : (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u);
 680     }
 681
 682     /**
 683      * Returns the segment for the given index, creating it and recording in segment
 684      * table (via CAS) if not already present.
 685      *
 686      * @param k the index
 687      * @return the segment
 688      */
 689     @SuppressWarnings("unchecked")
 690     private Segment<K, V> ensureSegment(int k) {
 691         /* Create a new segment for this slot if none exists yet */
 692         Segment<K, V>[] ss = this.segments;
 693         long u = (k << SSHIFT) + SBASE;
 694         Segment<K, V> seg;
 695         if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
 696             /* Use segment[0] as the prototype to initialize this slot */
 697             Segment<K, V> proto = ss[0];
 698             int len = proto.table.length;
 699             float lf = proto.loadFactor;
 700             int threshold = (int) (len * lf);
 701             HashEntry<K, V>[] tab = (HashEntry<K, V>[]) new HashEntry[len];
 702             if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
 703                 Segment<K, V> segment = new Segment<>(lf, threshold, tab);
 704                 while ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
 705                     if (UNSAFE.compareAndSwapObject(ss, u, null, seg = segment)) {
 706                         break;
 707                     }
 708                 }
 709             }
 710         }
 711         return seg;
 712     }
 713
 714     // Hash-based segment and entry accesses
 715
 716     /**
 717      * Get the segment for the given hash
 718      */
 719     @SuppressWarnings("unchecked")
 720     private Segment<K, V> segmentForHash(int h) {
 721         long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
 722         return (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u);
 723     }
 724
 725     /**
 726      * Gets the table entry for the given segment and hash
 727      */
 728     @SuppressWarnings("unchecked")
 729     static final <K, V> HashEntry<K, V> entryForHash(Segment<K, V> seg, int h) {
 730         /* Locate, for the given hash, the head node of the chain in the HashEntry array maintained by this segment */
 731         HashEntry<K, V>[] tab;
 732         return (HashEntry<K, V>) ((seg == null || (tab = seg.table) == null) ? null
 733                 : UNSAFE.getObjectVolatile(tab, (((tab.length - 1) & h) << TSHIFT) + TBASE));
 734     }
 735
 736     /* ---------------- Public operations -------------- */
 737
 738     /**
 739      * Creates a new, empty map with the specified initial capacity, load factor and
 740      * concurrency level.
 741      *
 742      * @param initialCapacity  the initial capacity. The implementation performs
 743      *                         internal sizing to accommodate this many elements.
 744      * @param loadFactor       the load factor threshold, used to control resizing.
 745      *                         Resizing may be performed when the average number of
 746      *                         elements per bin exceeds this threshold.
 747      * @param concurrencyLevel the estimated number of concurrently updating
 748      *                         threads. The implementation performs internal sizing
 749      *                         to try to accommodate this many threads.
 750      * @throws IllegalArgumentException if the initial capacity is negative or the
 751      *                                  load factor or concurrencyLevel are
 752      *                                  nonpositive.
 753      */
 754     @SuppressWarnings("unchecked")
 755     public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
 756         if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
 757             throw new IllegalArgumentException();
 758         if (concurrencyLevel > MAX_SEGMENTS)
 759             concurrencyLevel = MAX_SEGMENTS;
 760         // Find power-of-two sizes best matching arguments
 761         int sshift = 0;
 762         int ssize = 1;
 763         while (ssize < concurrencyLevel) {
 764             ++sshift;
 765             ssize <<= 1;
 766         }
 767         this.segmentShift = 32 - sshift;
 768         this.segmentMask = ssize - 1;
 769         if (initialCapacity > MAXIMUM_CAPACITY)
 770             initialCapacity = MAXIMUM_CAPACITY;
 771         int c = initialCapacity / ssize;
 772         if (c * ssize < initialCapacity)
 773             ++c;
 774         int cap = MIN_SEGMENT_TABLE_CAPACITY;
 775         while (cap < c)
 776             cap <<= 1;
 777         // create segments and segments[0]
 778         Segment<K, V> s0 = new Segment<K, V>(loadFactor, (int) (cap * loadFactor),
 779                 (HashEntry<K, V>[]) new HashEntry[cap]);
 780         Segment<K, V>[] ss = (Segment<K, V>[]) new Segment[ssize];
 781         UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
 782         this.segments = ss;
 783     }
 784
 785     /**
 786      * Creates a new, empty map with the specified initial capacity and load factor
 787      * and with the default concurrencyLevel (16).
 788      *
 789      * @param initialCapacity The implementation performs internal sizing to
 790      *                        accommodate this many elements.
 791      * @param loadFactor      the load factor threshold, used to control resizing.
 792      *                        Resizing may be performed when the average number of
 793      *                        elements per bin exceeds this threshold.
 794      * @throws IllegalArgumentException if the initial capacity of elements is
 795      *                                  negative or the load factor is nonpositive
 796      *
 797      * @since 1.6
 798      */
 799     public ConcurrentHashMap(int initialCapacity, float loadFactor) {
 800         this(initialCapacity, loadFactor, DEFAULT_CONCURRENCY_LEVEL);
 801     }
 802
 803     /**
 804      * Creates a new, empty map with the specified initial capacity, and with
 805      * default load factor (0.75) and concurrencyLevel (16).
 806      *
 807      * @param initialCapacity the initial capacity. The implementation performs
 808      *                        internal sizing to accommodate this many elements.
 809      * @throws IllegalArgumentException if the initial capacity of elements is
 810      *                                  negative.
 811      */
 812     public ConcurrentHashMap(int initialCapacity) {
 813         this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
 814     }
 815
 816     /**
 817      * Creates a new, empty map with a default initial capacity (16), load factor
 818      * (0.75) and concurrencyLevel (16).
 819      */
 820     public ConcurrentHashMap() {
 821         this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
 822     }
 823
 824     /**
 825      * Creates a new map with the same mappings as the given map. The map is created
 826      * with a capacity of 1.5 times the number of mappings in the given map or 16
 827      * (whichever is greater), and a default load factor (0.75) and concurrencyLevel
 828      * (16).
 829      *
 830      * @param m the map
 831      */
 832     public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
 833         this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1, DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR,
 834                 DEFAULT_CONCURRENCY_LEVEL);
 835         putAll(m);
 836     }
 837
 838     /**
 839      * Returns true if this map contains no key-value mappings.
 840      *
 841      * @return true if this map contains no key-value mappings
 842      */
 843     public boolean isEmpty() {
 844         /*
 845          * Sum per-segment modCounts to avoid mis-reporting when elements are
 846          * concurrently added and removed in one segment while checking another, in
 847          * which case the table was never actually empty at any point. (The sum ensures
 848          * accuracy up through at least 1<<31 per-segment modifications before recheck.)
 849          * Methods size() and containsValue() use similar constructions for stability
 850          * checks.
 851          */
 852         /*
 853          * Overall idea: traverse segment[] twice, without locking. First pass: return
 854          * false as soon as any segment's count is non-zero, accumulating each segment's
 855          * modCount along the way; a sum of 0 means no write ever happened, so return true.
 856          * Second pass: fail again on any non-empty segment, this time subtracting each
 857          * modCount; a non-zero final sum means a write happened between the two passes.
 858          */
 859         long sum = 0L;
 860         final Segment<K, V>[] segments = this.segments;
 861         for (int i = 0; i < segments.length; i++) {
 862             Segment<K, V> seg = segmentAt(segments, i);
 863             if (seg != null) {
 864                 if (seg.count != 0) {
 865                     return false;
 866                 }
 867                 sum += seg.modCount;
 868             }
 869         }
 870
 871         if (sum != 0L) {
 872             for (int i = 0; i < segments.length; i++) {
 873                 Segment<K, V> seg = segmentAt(segments, i);
 874                 if (seg != null) {
 875                     if (seg.count != 0) {
 876                         return false;
 877                     }
 878                     sum -= seg.modCount;
 879                 }
 880             }
 881
 882             if (sum != 0) {
 883                 return false;
 884             }
 885         }
 886         return true;
 887     }
 888
 889     /**
 890      * Returns the number of key-value mappings in this map. If the map contains
 891      * more than Integer.MAX_VALUE elements, returns
 892      * Integer.MAX_VALUE.
 893      *
 894      * @return the number of key-value mappings in this map
 895      */
 896     public int size() {
 897         /* Other segments may see concurrent writes while size is computed, so the result is only a snapshot over some window of time */
 898         final Segment<K, V>[] segments = this.segments;
 899         int size;         // computed size
 900         long sum;         // modCounts accumulated in this pass
 901         long last = 0L;   // modCounts accumulated in the previous pass
 902         int retries = -1; // initial retry count
 903         boolean overflow; // whether size overflowed an int
 904         try {
 905             /*
 906              * Overall idea: first try to compute the size without locking, retrying a
 907              * bounded number of times; if no concurrent write is observed across two
 908              * passes, the result is stable and we are done. Otherwise lock every
 909              * segment and count under the locks.
 910              */
 911             for (;;) {
 912                 /* The retry limit was reached: lock all segments, then count */
 913                 if (retries++ == RETRIES_BEFORE_LOCK) {
 914                     for (int i = 0; i < segments.length; i++) {
 915                         segmentAt(segments, i).lock();
 916                     }
 917                 }
 918
 919                 sum = 0L;
 920                 size = 0;
 921                 overflow = false;
 922                 for (int i = 0; i < segments.length; i++) {
 923                     Segment<K, V> seg = segmentAt(segments, i);
 924                     if (seg != null) {
 925                         sum += seg.modCount;
 926                         int c = seg.count;
 927                         /* The total size overflowed an int */
 928                         if (c < 0 || (size += c) < 0) {
 929                             overflow = true;
 930                         }
 931                     }
 932                 }
 933
 934                 /* Equal modCount sums across two passes mean the size was stable over that window, so stop; otherwise retry, and past the limit, lock and recount */
 935                 if (sum == last) {
 936                     break;
 937                 }
 938                 last = sum;
 939             }
 940         } finally {
 941             if (retries > RETRIES_BEFORE_LOCK) {
 942                 for (int i = 0; i < segments.length; i++) {
 943                     segmentAt(segments, i).unlock();
 944                 }
 945             }
 946         }
 947         return overflow ? Integer.MAX_VALUE : size;
 948     }
 949
 950     /**
 951      * Returns the value to which the specified key is mapped, or {@code null} if
 952      * this map contains no mapping for the key.

 953      * More formally, if this map contains a mapping from a key {@code k} to a value
 954      * {@code v} such that {@code key.equals(k)}, then this method returns
 955      * {@code v}; otherwise it returns {@code null}. (There can be at most one such
 956      * mapping.)
 957      *
 958      * @throws NullPointerException if the specified key is null
 959      */
 960     @SuppressWarnings("unchecked")
 961     public V get(Object key) {
 962         Segment<K, V> seg;
 963         HashEntry<K, V>[] tab;
 964         int h = hash(key);
 965         /* Compute the in-memory offset of this key's slot within segment[] */
 966         long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
 967         if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u)) != null && (tab = seg.table) != null) {
 968             for (HashEntry<K, V> e = (HashEntry<K, V>) UNSAFE.getObjectVolatile(tab,
 969                     (((tab.length - 1) & h) << TSHIFT) + TBASE); e != null; e = e.next) {
 970                 K k;
 971                 if ((k = e.key) == key || (e.hash == h && key.equals(k))) {
 972                     return e.value;
 973                 }
 974             }
 975         }
 976         return null;
 977     }
 978
 979     /**
 980      * Tests if the specified object is a key in this table.
 981      *
 982      * @param key possible key
 983      * @return true if and only if the specified object is a key in this
 984      *         table, as determined by the equals method; false
 985      *         otherwise.
 986      * @throws NullPointerException if the specified key is null
 987      */
 988     @SuppressWarnings("unchecked")
 989     public boolean containsKey(Object key) {
 990         Segment<K, V> seg;
 991         HashEntry<K, V>[] tab;
 992         int h = hash(key);
 993         long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
 994         if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u)) != null && (tab = seg.table) != null) {
 995             long ut = (((tab.length - 1) & h) << TSHIFT) + TBASE;
 996             for (HashEntry<K, V> e = (HashEntry<K, V>) UNSAFE.getObjectVolatile(tab, ut); e != null; e = e.next) {
 997                 K k;
 998                 if ((k = e.key) == key || (e.hash == h && key.equals(k))) {
 999                     return true;
1000                 }
1001             }
1002         }
1003
1004         return false;
1005     }
1006
1007     /**
1008      * Returns true if this map maps one or more keys to the specified
1009      * value.
 * Note: This method requires a full internal traversal of the hash
 * table, and so is much slower than method containsKey.
 *
 * @param value value whose presence in this map is to be tested
 * @return true if this map maps one or more keys to the specified value
 * @throws NullPointerException if the specified value is null
 */
public boolean containsValue(Object value) {
    /* Broadly the same approach as size(); see the notes on size() */
    if (value == null) {
        throw new NullPointerException();
    }

    final Segment<K,V>[] segments = this.segments;
    boolean found = false;
    long last = 0;
    int retries = -1;
    try {
        /* On a match we break straight out to this label */
        outer: for (;;) {
            if (retries++ == RETRIES_BEFORE_LOCK) {
                for (int i = 0; i < segments.length; i++) {
                    segmentAt(segments, i).lock();
                }
            }

            /* Accumulates the concurrent modification count */
            long sum = 0L;
            for (int i = 0; i < segments.length; i++) {
                Segment<K,V> seg = segmentAt(segments, i);
                HashEntry<K,V>[] tab;
                if (seg != null && (tab = seg.table) != null) {
                    for (int j = 0; j < tab.length; j++) {
                        HashEntry<K,V> e;
                        for (e = entryAt(tab, j); e != null; e = e.next) {
                            V v = e.value;
                            if (v != null && value.equals(v)) {
                                found = true;
                                /* Found a matching value; break out to the outermost loop */
                                break outer;
                            }
                        }
                    }
                    sum += seg.modCount;
                }
            }

            if (retries > 0 && sum == last) {
                break;
            }
            last = sum;
        }
    } finally {
        if (retries > RETRIES_BEFORE_LOCK) {
            for (int i = 0; i < segments.length; i++) {
                segmentAt(segments, i).unlock();
            }
        }
    }
    return found;
}

/**
 * Legacy method testing if some key maps into the specified value in this
 * table.
 * This method is identical in functionality to {@link #containsValue},
 * and exists solely to ensure full compatibility with class
 * {@link java.util.Hashtable}, which supported this method prior to
 * introduction of the Java Collections framework.
 *
 * @param value a value to search for
 * @return true if and only if some key maps to the value argument in this
 *         table as determined by the equals method; false otherwise
 * @throws NullPointerException if the specified value is null
 */
public boolean contains(Object value) {
    return containsValue(value);
}

/**
 * Maps the specified key to the specified value in this table. Neither the key
 * nor the value can be null.
 *

 * The value can be retrieved by calling the get method with a key that
 * is equal to the original key.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with key, or null if
 *         there was no mapping for key
 * @throws NullPointerException if the specified key or value is null
 */
@SuppressWarnings("unchecked")
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null) {
        throw new NullPointerException();
    }
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask;
    /* If the segment at this index is still null, create it */
    if ((s = (Segment<K,V>) UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null) {
        s = ensureSegment(j);
    }
    return s.put(key, hash, value, false);
}

/**
 * {@inheritDoc}
 *
 * @return the previous value associated with the specified key, or
 *         null if there was no mapping for the key
 * @throws NullPointerException if the specified key or value is null
 */
@SuppressWarnings("unchecked")
public V putIfAbsent(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask;
    if ((s = (Segment<K,V>) UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null)
        s = ensureSegment(j);
    return s.put(key, hash, value, true);
}

/**
 * Copies all of the mappings from the specified map to this one. These mappings
 * replace any mappings that this map had for any of the keys currently in the
 * specified map.
 *
 * @param m mappings to be stored in this map
 */
public void putAll(Map<? extends K, ? extends V> m) {
    for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
        put(e.getKey(), e.getValue());
}

/**
 * Removes the key (and its corresponding value) from this map. This method does
 * nothing if the key is not in the map.
 *
 * @param key the key that needs to be removed
 * @return the previous value associated with key, or null if
 *         there was no mapping for key
 * @throws NullPointerException if the specified key is null
 */
public V remove(Object key) {
    int hash = hash(key);
    Segment<K,V> s = segmentForHash(hash);
    return s == null ? null : s.remove(key, hash, null);
}

/**
 * {@inheritDoc}
 *
 * @throws NullPointerException if the specified key is null
 */
public boolean remove(Object key, Object value) {
    int hash = hash(key);
    Segment<K,V> s;
    return value != null && (s = segmentForHash(hash)) != null && s.remove(key, hash, value) != null;
}

/**
 * {@inheritDoc}
 *
 * @throws NullPointerException if any of the arguments are null
 */
public boolean replace(K key, V oldValue, V newValue) {
    int hash = hash(key);
    if (oldValue == null || newValue == null)
        throw new NullPointerException();
    Segment<K,V> s = segmentForHash(hash);
    return s != null && s.replace(key, hash, oldValue, newValue);
}

/**
 * {@inheritDoc}
 *
 * @return the previous value associated with the specified key, or
 *         null if there was no mapping for the key
 * @throws NullPointerException if the specified key or value is null
 */
public V replace(K key, V value) {
    int hash = hash(key);
    if (value == null) {
        throw new NullPointerException();
    }
    Segment<K,V> seg = segmentForHash(hash);
    return seg == null ? null : seg.replace(key, hash, value);
}

/**
 * Removes all of the mappings from this map.
 */
public void clear() {
    final Segment<K,V>[] segments = this.segments;
    for (int j = 0; j < segments.length; ++j) {
        Segment<K,V> s = segmentAt(segments, j);
        if (s != null)
            s.clear();
    }
}

/**
 * Returns a {@link Set} view of the keys contained in this map. The set is
 * backed by the map, so changes to the map are reflected in the set, and
 * vice-versa. The set supports element removal, which removes the corresponding
 * mapping from this map, via the Iterator.remove, Set.remove,
 * removeAll, retainAll, and clear operations. It
 * does not support the add or addAll operations.
 *

 * The view's iterator is a "weakly consistent" iterator that will
 * never throw {@link ConcurrentModificationException}, and guarantees to
 * traverse elements as they existed upon construction of the iterator, and may
 * (but is not guaranteed to) reflect any modifications subsequent to
 * construction.
 */
public Set<K> keySet() {
    Set<K> ks = keySet;
    return (ks != null) ? ks : (keySet = new KeySet());
}

/**
 * Returns a {@link Collection} view of the values contained in this map. The
 * collection is backed by the map, so changes to the map are reflected in the
 * collection, and vice-versa. The collection supports element removal, which
 * removes the corresponding mapping from this map, via the
 * Iterator.remove, Collection.remove, removeAll,
 * retainAll, and clear operations. It does not support the
 * add or addAll operations.
 *

 * The view's iterator is a "weakly consistent" iterator that will
 * never throw {@link ConcurrentModificationException}, and guarantees to
 * traverse elements as they existed upon construction of the iterator, and may
 * (but is not guaranteed to) reflect any modifications subsequent to
 * construction.
 */
public Collection<V> values() {
    Collection<V> vs = values;
    return (vs != null) ? vs : (values = new Values());
}

/**
 * Returns a {@link Set} view of the mappings contained in this map. The set is
 * backed by the map, so changes to the map are reflected in the set, and
 * vice-versa. The set supports element removal, which removes the corresponding
 * mapping from the map, via the Iterator.remove, Set.remove,
 * removeAll, retainAll, and clear operations. It
 * does not support the add or addAll operations.
 *

 * The view's iterator is a "weakly consistent" iterator that will
 * never throw {@link ConcurrentModificationException}, and guarantees to
 * traverse elements as they existed upon construction of the iterator, and may
 * (but is not guaranteed to) reflect any modifications subsequent to
 * construction.
 */
public Set<Map.Entry<K,V>> entrySet() {
    Set<Map.Entry<K,V>> es = entrySet;
    return (es != null) ? es : (entrySet = new EntrySet());
}

/**
 * Returns an enumeration of the keys in this table.
 *
 * @return an enumeration of the keys in this table
 * @see #keySet()
 */
public Enumeration<K> keys() {
    return new KeyIterator();
}

/**
 * Returns an enumeration of the values in this table.
 *
 * @return an enumeration of the values in this table
 * @see #values()
 */
public Enumeration<V> elements() {
    return new ValueIterator();
}

/* ---------------- Iterator Support -------------- */

abstract class HashIterator {
    int nextSegmentIndex;
    int nextTableIndex;
    HashEntry<K,V>[] currentTable;
    HashEntry<K,V> nextEntry;
    HashEntry<K,V> lastReturned;

    HashIterator() {
        nextSegmentIndex = segments.length - 1;
        nextTableIndex = -1;
        advance();
    }

    /**
     * Set nextEntry to first node of next non-empty table (in backwards order, to
     * simplify checks).
     */
    final void advance() {
        for (;;) {
            if (nextTableIndex >= 0) {
                if ((nextEntry = entryAt(currentTable, nextTableIndex--)) != null)
                    break;
            } else if (nextSegmentIndex >= 0) {
                Segment<K,V> seg = segmentAt(segments, nextSegmentIndex--);
                if (seg != null && (currentTable = seg.table) != null)
                    nextTableIndex = currentTable.length - 1;
            } else
                break;
        }
    }

    final HashEntry<K,V> nextEntry() {
        HashEntry<K,V> e = nextEntry;
        if (e == null)
            throw new NoSuchElementException();
        lastReturned = e; // cannot assign until after null check
        if ((nextEntry = e.next) == null)
            advance();
        return e;
    }

    public final boolean hasNext() {
        return nextEntry != null;
    }

    public final boolean hasMoreElements() {
        return nextEntry != null;
    }

    public final void remove() {
        if (lastReturned == null)
            throw new IllegalStateException();
        ConcurrentHashMap.this.remove(lastReturned.key);
        lastReturned = null;
    }
}

final class KeyIterator extends HashIterator implements Iterator<K>, Enumeration<K> {
    public final K next() {
        return super.nextEntry().key;
    }

    public final K nextElement() {
        return super.nextEntry().key;
    }
}

final class ValueIterator extends HashIterator implements Iterator<V>, Enumeration<V> {
    public final V next() {
        return super.nextEntry().value;
    }

    public final V nextElement() {
        return super.nextEntry().value;
    }
}

/**
 * Custom Entry class used by EntryIterator.next(), that relays setValue changes
 * to the underlying map.
 */
final class WriteThroughEntry extends AbstractMap.SimpleEntry<K,V> {
    WriteThroughEntry(K k, V v) {
        super(k, v);
    }

    /**
     * Set our entry's value and write through to the map. The value to return is
     * somewhat arbitrary here.
     * Since a WriteThroughEntry does not necessarily track
     * asynchronous changes, the most recent "previous" value could be different
     * from what we return (or could even have been removed, in which case the put
     * will re-establish it). We do not and cannot guarantee more.
     */
    public V setValue(V value) {
        if (value == null)
            throw new NullPointerException();
        V v = super.setValue(value);
        ConcurrentHashMap.this.put(getKey(), value);
        return v;
    }
}

final class EntryIterator extends HashIterator implements Iterator<Map.Entry<K,V>> {
    public Map.Entry<K,V> next() {
        HashEntry<K,V> e = super.nextEntry();
        return new WriteThroughEntry(e.key, e.value);
    }
}

final class KeySet extends AbstractSet<K> {
    public Iterator<K> iterator() {
        return new KeyIterator();
    }

    public int size() {
        return ConcurrentHashMap.this.size();
    }

    public boolean isEmpty() {
        return ConcurrentHashMap.this.isEmpty();
    }

    public boolean contains(Object o) {
        return ConcurrentHashMap.this.containsKey(o);
    }

    public boolean remove(Object o) {
        return ConcurrentHashMap.this.remove(o) != null;
    }

    public void clear() {
        ConcurrentHashMap.this.clear();
    }
}

final class Values extends AbstractCollection<V> {
    public Iterator<V> iterator() {
        return new ValueIterator();
    }

    public int size() {
        return ConcurrentHashMap.this.size();
    }

    public boolean isEmpty() {
        return ConcurrentHashMap.this.isEmpty();
    }

    public boolean contains(Object o) {
        return ConcurrentHashMap.this.containsValue(o);
    }

    public void clear() {
        ConcurrentHashMap.this.clear();
    }
}

final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
    public Iterator<Map.Entry<K,V>> iterator() {
        return new EntryIterator();
    }

    public boolean contains(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>) o;
        V v = ConcurrentHashMap.this.get(e.getKey());
        return v != null && v.equals(e.getValue());
    }

    public boolean remove(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>) o;
        return ConcurrentHashMap.this.remove(e.getKey(), e.getValue());
    }

    public int size() {
        return ConcurrentHashMap.this.size();
    }

    public boolean isEmpty() {
        return ConcurrentHashMap.this.isEmpty();
    }

    public void clear() {
        ConcurrentHashMap.this.clear();
    }
}

/* ---------------- Serialization Support -------------- */

/**
 * Save the state of the ConcurrentHashMap instance to a stream (i.e.,
 * serialize it).
 *
 * @param s the stream
 * @serialData the key (Object) and value (Object) for each key-value mapping,
 *             followed by a null pair. The key-value mappings are emitted in no
 *             particular order.
 */
private void writeObject(java.io.ObjectOutputStream s) throws IOException {
    // force all segments for serialization compatibility
    for (int k = 0; k < segments.length; ++k)
        ensureSegment(k);
    s.defaultWriteObject();

    final Segment<K,V>[] segments = this.segments;
    for (int k = 0; k < segments.length; ++k) {
        Segment<K,V> seg = segmentAt(segments, k);
        seg.lock();
        try {
            HashEntry<K,V>[] tab = seg.table;
            for (int i = 0; i < tab.length; ++i) {
                HashEntry<K,V> e;
                for (e = entryAt(tab, i); e != null; e = e.next) {
                    s.writeObject(e.key);
                    s.writeObject(e.value);
                }
            }
        } finally {
            seg.unlock();
        }
    }
    s.writeObject(null);
    s.writeObject(null);
}

/**
 * Reconstitute the ConcurrentHashMap instance from a stream (i.e.,
 * deserialize it).
 *
 * @param s the stream
 */
@SuppressWarnings("unchecked")
private void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException {
    s.defaultReadObject();

    // re-randomize the hash seed
    UNSAFE.putIntVolatile(this, HASHSEED_OFFSET, randomHashSeed(this));

    // Re-initialize segments to be minimally sized, and let grow.
    int cap = MIN_SEGMENT_TABLE_CAPACITY;
    final Segment<K,V>[] segments = this.segments;
    for (int k = 0; k < segments.length; ++k) {
        Segment<K,V> seg = segments[k];
        if (seg != null) {
            seg.threshold = (int) (cap * seg.loadFactor);
            seg.table = (HashEntry<K,V>[]) new HashEntry[cap];
        }
    }

    // Read the keys and values, and put the mappings in the table
    for (;;) {
        K key = (K) s.readObject();
        V value = (V) s.readObject();
        if (key == null)
            break;
        put(key, value);
    }
}

// Unsafe mechanics
private static final sun.misc.Unsafe UNSAFE;
private static final long SBASE;
private static final int SSHIFT;
private static final long TBASE;
private static final int TSHIFT;
private static final long HASHSEED_OFFSET;

static {
    int ss, ts;
    try {
        UNSAFE = sun.misc.Unsafe.getUnsafe();
        Class tc = HashEntry[].class;
        Class sc = Segment[].class;
        /* Base offset of element 0 within each array object */
        TBASE = UNSAFE.arrayBaseOffset(tc);
        SBASE = UNSAFE.arrayBaseOffset(sc);
        /* Size in bytes of each array element */
        ts = UNSAFE.arrayIndexScale(tc);
        ss = UNSAFE.arrayIndexScale(sc);
        HASHSEED_OFFSET = UNSAFE.objectFieldOffset(ConcurrentHashMap.class.getDeclaredField("hashSeed"));
    } catch (Exception e) {
        throw new Error(e);
    }
    if ((ss & (ss - 1)) != 0 || (ts & (ts - 1)) != 0)
        throw new Error("data type scale not a power of two");
    /* Express the element size as a power of two, e.g. ss = 16 gives SSHIFT = 4 */
    SSHIFT = 31 - Integer.numberOfLeadingZeros(ss);
    TSHIFT = 31 -
            Integer.numberOfLeadingZeros(ts);
}

}
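The static initializer above derives SSHIFT/TSHIFT as the base-2 logarithm of the per-element scale, so that `(i << SHIFT) + BASE` computes the same offset as `BASE + i * scale`. A minimal sketch of that arithmetic (the class and method names here are illustrative, not from the listing):

```java
public class OffsetDemo {
    // For a power-of-two element scale, (i << shift) + base == base + i * scale.
    static long offset(long base, int scale, int i) {
        int shift = 31 - Integer.numberOfLeadingZeros(scale); // e.g. scale 8 -> shift 3
        return ((long) i << shift) + base;
    }

    public static void main(String[] args) {
        // With a hypothetical array base offset of 16 bytes and 8-byte references,
        // element 3 lives at 16 + 3 * 8 = 40.
        System.out.println(offset(16L, 8, 3)); // prints 40
    }
}
```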
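As size() shows, the count is taken as a snapshot over retried unlocked passes, locking all segments only as a last resort; under concurrent writes the result is therefore only an estimate. A small usage sketch (class name illustrative), where the result is exact because no other thread is writing:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SizeDemo {
    static int sizeAfterPuts() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("b", 20); // overwrite: does not change the mapping count
        return map.size(); // exact here, since no concurrent writers
    }

    public static void main(String[] args) {
        System.out.println(sizeAfterPuts()); // prints 2
    }
}
```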
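put, putIfAbsent, replace, and the two-argument remove all locate the segment by hash and delegate to it. Their atomic contracts can be exercised as follows (illustrative sketch):

```java
import java.util.concurrent.ConcurrentHashMap;

public class UpdateDemo {
    static String demo() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Integer prev1 = map.putIfAbsent("n", 1);  // null: key absent, so 1 is installed
        Integer prev2 = map.putIfAbsent("n", 9);  // 1: key present, so 9 is ignored
        boolean swapped = map.replace("n", 1, 2); // true: old value matched
        boolean removed = map.remove("n", 99);    // false: value does not match
        return prev1 + "," + prev2 + "," + swapped + "," + removed + "," + map.get("n");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints null,1,true,false,2
    }
}
```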
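HashIterator walks the segments and buckets backwards and, per the javadoc above, never throws ConcurrentModificationException, so mutating the map mid-iteration is legal (sketch, names illustrative):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

public class IterDemo {
    static boolean mutateWhileIterating() {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 4; i++) map.put(i, i);
        Iterator<Integer> it = map.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            map.put(100, 100); // no ConcurrentModificationException
        }
        return map.containsKey(100); // the iterator may or may not have seen key 100
    }

    public static void main(String[] args) {
        System.out.println(mutateWhileIterating()); // prints true
    }
}
```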
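WriteThroughEntry relays setValue back to the map, so mutating an entry obtained from entrySet() updates the underlying map (illustrative sketch):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EntryDemo {
    static int writeThrough() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("x", 1);
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            e.setValue(2); // written through to the map via put()
        }
        return map.get("x");
    }

    public static void main(String[] args) {
        System.out.println(writeThrough()); // prints 2
    }
}
```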
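writeObject/readObject emit the key/value pairs terminated by a null pair, so a plain serialize/deserialize round trip preserves the mappings. A sketch (names illustrative; exceptions are folded into a -1 return purely for brevity):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.concurrent.ConcurrentHashMap;

public class SerDemo {
    @SuppressWarnings("unchecked")
    static int roundTrip() {
        try {
            ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
            map.put("a", 1);
            map.put("b", 2);
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(map); // writes each entry, then a null pair
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                ConcurrentHashMap<String, Integer> copy =
                        (ConcurrentHashMap<String, Integer>) ois.readObject();
                return copy.get("a") + copy.get("b"); // 1 + 2
            }
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // prints 3
    }
}
```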

Original: https://www.cnblogs.com/wha6239/p/16419371.html
Author: 一只烤鸭朝北走
Title: ConcurrentHashMap深入剖析(基于JDK1.7)

Original articles are protected by copyright. Please credit the source when reprinting: https://www.johngo689.com/611274/

Reprinted articles are protected by the original author's copyright. Please credit the original author when reprinting!
