Java concurrency
The Java programming language and the Java virtual machine (JVM) are designed to support concurrent programming. All execution takes place in the context of threads. Objects and resources can be accessed by many separate threads. Each thread has its own path of execution, but can potentially access any object in the program. The programmer must ensure that read and write access to objects is properly coordinated (or "synchronized") between threads. Thread synchronization ensures that objects are modified by only one thread at a time and prevents threads from accessing partially updated objects during modification by another thread. The Java language has built-in constructs to support this coordination.
Processes and threads
Most implementations of the Java virtual machine run as a single process. In the Java programming language, concurrent programming is primarily concerned with threads (also called lightweight processes). Multiple processes can only be realized with multiple JVMs.
Thread objects
Threads share the process's resources, including memory and open files. This makes for efficient, but potentially problematic, communication. Every application has at least one thread, called the main thread. The main thread has the ability to create additional threads as `Runnable` or `Callable` objects. The `Callable` interface is similar to `Runnable` in that both are designed for classes whose instances are potentially executed by another thread. A `Runnable`, however, does not return a result and cannot throw a checked exception. Each thread can be scheduled on a different CPU core, or use time-slicing on a single hardware processor, or time-slicing on many hardware processors. There is no general solution to how Java threads are mapped to native OS threads; every JVM implementation can do this differently. Each thread is associated with an instance of the class `Thread`. Threads can be managed either by directly using the `Thread` objects, or indirectly by using abstract mechanisms such as `Executor`s or `ExecutorService`s.
Starting a Thread
Two ways to start a thread:
Provide a `Runnable` object
Subclass `Thread`
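Both approaches can be sketched in one small program (class and message names here are illustrative):

```java
public class StartThreads {
    public static void main(String[] args) throws InterruptedException {
        // 1. Provide a Runnable object to the Thread constructor.
        Thread t1 = new Thread(() -> System.out.println("from runnable"));

        // 2. Subclass Thread and override run().
        Thread t2 = new Thread() {
            @Override public void run() {
                System.out.println("from subclass");
            }
        };

        t1.start();   // start() spawns a new thread that invokes run()
        t2.start();
        t1.join();    // wait for both threads before main exits
        t2.join();
    }
}
```

Note that calling `run()` directly would execute the code in the current thread; only `start()` creates a new thread of execution.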
Interrupts
An interrupt tells a thread that it should stop what it is doing and do something else. A thread sends an interrupt by invoking `interrupt()` on the `Thread` object for the thread to be interrupted. The interrupt mechanism is implemented using an internal flag known as the "interrupted status". Invoking `interrupt()` sets this flag. By convention, any method that exits by throwing an `InterruptedException` clears the interrupted status when it does so. However, it's always possible that the interrupted status will immediately be set again, by another thread invoking `interrupt()`.
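A common pattern follows from this convention: when a blocking call throws `InterruptedException`, the catch block restores the interrupted status so that outer code can still observe it. A minimal sketch:

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(100);          // blocking call
                } catch (InterruptedException e) {
                    // sleep() cleared the interrupted status when it threw;
                    // restore it so the loop condition sees the interrupt.
                    Thread.currentThread().interrupt();
                }
            }
            System.out.println("worker exiting");
        });
        worker.start();
        Thread.sleep(200);
        worker.interrupt();   // sets the worker's interrupted status
        worker.join();
    }
}
```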
Joins
The `join` method allows one thread to wait for the completion of another.
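For example, `t.join()` blocks the calling thread until `t`'s `run()` method has completed, which also guarantees that anything `t` printed or wrote is visible afterwards:

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> System.out.println("child done"));
        t.start();
        t.join();                          // blocks until t finishes
        System.out.println("main done");   // always printed after "child done"
    }
}
```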
Exceptions
Uncaught exceptions thrown by a thread's code terminate that thread; by default, the JVM prints the exception's stack trace to the console. An application can register an uncaught-exception handler, per thread or globally, to customize this behavior.
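A handler is registered with `Thread.setUncaughtExceptionHandler` (or globally with `Thread.setDefaultUncaughtExceptionHandler`); the thread name here is illustrative:

```java
public class HandlerDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { throw new RuntimeException("boom"); });
        t.setName("worker-1");
        // The handler runs in the dying thread just before it terminates.
        t.setUncaughtExceptionHandler((thread, ex) ->
            System.out.println("caught in " + thread.getName() + ": " + ex.getMessage()));
        t.start();
        t.join();
    }
}
```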
Memory model
The Java memory model describes how threads in the Java programming language interact through memory. On modern platforms, code is frequently not executed in the order it was written. It is reordered by the compiler, the processor and the memory subsystem to achieve maximum performance. The Java programming language does not guarantee linearizability, or even sequential consistency, when reading or writing fields of shared objects, and this is to allow for compiler optimizations (such as register allocation, common subexpression elimination, and redundant read elimination), all of which work by reordering memory reads and writes.
Synchronization
Threads communicate primarily by sharing access to fields and the objects that reference fields refer to. This form of communication is extremely efficient, but makes two kinds of errors possible: thread interference and memory consistency errors. The tool needed to prevent these errors is synchronization. Reorderings can come into play in incorrectly synchronized multithreaded programs, where one thread is able to observe the effects of other threads, and may be able to detect that variable accesses become visible to other threads in a different order than executed or specified in the program. Most of the time, one thread doesn't care what the other is doing. But when it does, that's what synchronization is for.
To synchronize threads, Java uses monitors, which are a high-level mechanism for allowing only one thread at a time to execute a region of code protected by the monitor. The behavior of monitors is explained in terms of locks; there is a lock associated with each object. Synchronization has several aspects. The most well-understood is mutual exclusion—only one thread can hold a monitor at once, so synchronizing on a monitor means that once one thread enters a synchronized block protected by a monitor, no other thread can enter a block protected by that monitor until the first thread exits the synchronized block.
But there is more to synchronization than mutual exclusion. Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor. After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory. We will then be able to see all of the writes made visible by the previous release. Reads and writes to fields are linearizable if either the field is volatile, or the field is protected by a unique lock which is acquired by all readers and writers.
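Both aspects — mutual exclusion and visibility — can be seen in a shared counter. A minimal sketch (class names are illustrative):

```java
public class SyncCounter {
    private int count = 0;
    private final Object monitor = new Object();

    void increment() {
        synchronized (monitor) {  // acquire the monitor's lock
            count++;              // read-modify-write is atomic w.r.t. other holders
        }                         // release: writes become visible to the next acquirer
    }

    int get() {
        synchronized (monitor) { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(c.get());
    }
}
```

Without the `synchronized` blocks, the two threads could interleave inside `count++` and lose updates, so the final value could be less than 20000; with them, the result is always 20000.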
Locks and synchronized blocks
A thread can achieve mutual exclusion either by entering a synchronized block or method, which acquires an implicit lock, or by acquiring an explicit lock (such as the `ReentrantLock` from the package `java.util.concurrent.locks`). Both approaches have the same implications for memory behavior. If all accesses to a particular field are protected by the same lock, then reads and writes to that field are linearizable (atomic).
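The explicit-lock variant of the counter looks like this; the `lock()`/`unlock()` pair in a `try`/`finally` is the idiomatic shape, since an exception must not leave the lock held:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    void increment() {
        lock.lock();        // explicit acquire; same memory semantics as synchronized
        try {
            count++;
        } finally {
            lock.unlock();  // always release, even if the body throws
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockCounter c = new LockCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(c.count);
    }
}
```

Explicit locks add capabilities the implicit monitor lacks, such as `tryLock()` and interruptible acquisition.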
Volatile fields
When applied to a field, the Java `volatile` keyword guarantees that reads and writes to the field are linearizable. Reading a `volatile` field is like acquiring a lock: the working memory is invalidated and the `volatile` field's current value is reread from memory. Writing a `volatile` field is like releasing a lock: the `volatile` field is immediately written back to memory.
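The classic use is a stop flag shared between threads. Without `volatile`, the JIT compiler may hoist the read of the flag out of the loop and the worker may never observe the update; with it, the write is guaranteed to become visible:

```java
public class VolatileFlag {
    // volatile ensures the worker thread sees the main thread's write.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) { }      // spins until the flag flips
            System.out.println("stopped");
        });
        worker.start();
        Thread.sleep(100);
        running = false;             // volatile write: visible to the worker
        worker.join();
    }
}
```

Note that `volatile` provides visibility and ordering but not atomicity: `count++` on a `volatile int` is still a racy read-modify-write.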
Final fields
A field declared to be `final` cannot be modified once it has been initialized. An object's `final` fields are initialized in its constructor. As long as the `this` reference is not released from the constructor before the constructor returns, then the correct value of any `final` fields will be visible to other threads without synchronization.
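This is why immutable value classes are inherently thread-safe. A minimal sketch (the `Point` class is illustrative):

```java
public class Point {
    private final int x, y;   // final: assigned once, in the constructor

    Point(int x, int y) {
        this.x = x;
        this.y = y;
        // 'this' is not leaked before the constructor returns, so any
        // thread that later obtains this Point is guaranteed to see
        // the fully initialized values of x and y.
    }

    public static void main(String[] args) throws InterruptedException {
        Point p = new Point(3, 4);
        Thread reader = new Thread(() ->
            System.out.println(p.x + "," + p.y));  // no synchronization needed
        reader.start();
        reader.join();
    }
}
```

Leaking `this` from the constructor — for example, by registering the half-built object as a listener — would forfeit this guarantee.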
History
Since JDK 1.2, Java has included a standard set of collection classes, the Java collections framework. Doug Lea, who also participated in the Java collections framework implementation, developed a concurrency package, comprising several concurrency primitives and a large battery of collection-related classes. This work was continued and updated as part of JSR 166, which was chaired by Doug Lea. JDK 5.0 incorporated many additions and clarifications to the Java concurrency model. The concurrency APIs developed by JSR 166 were also included as part of the JDK for the first time. JSR 133 provided support for well-defined atomic operations in a multithreaded/multiprocessor environment. Both the Java SE 6 and Java SE 7 releases introduced updated versions of the JSR 166 APIs as well as several new additional APIs.
This article is derived from Wikipedia and licensed under CC BY-SA 4.0. View the original article.