This post is the first in a series on concurrency. It describes the benefits of concurrency, some of the problems we might face in our code, and simple ways we can fix them. While there are far more complex problems that require more advanced solutions, they all share the same principles and, in many cases, are rooted in these basic solutions.

This post assumes you have some familiarity with concurrency, threading and issues of state. If you do, you can skip to the next section; if you don't, or aren't sure, here's a very brief refresher:


Concurrency

The ability to run blocks of code at the same time, usually on a processor that has multiple cores for executing the code. How the concurrent code is executed on the hardware, and how many cores it runs on, is a question of parallelism.


Thread

A thread encapsulates a single execution path through application code. Each thread has its own stack, can execute the same code as other threads, and can also access shared data.
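As a minimal sketch (the class and method names here are my own, for illustration only): the two threads below execute the same code, but the local variable `sum` lives on each thread's own stack, while the `results` array is shared data both threads write into.

```java
// Hypothetical example: each thread runs sumUpTo with its own stack-local
// variables, while the results array is shared between the threads.
class ThreadDemo {
    static int sumUpTo(int n) {
        int sum = 0; // lives on the calling thread's own stack
        for (int i = 1; i <= n; i++) sum += i;
        return sum;
    }

    static int[] runConcurrently() {
        int[] results = new int[2]; // shared data both threads write into
        Thread a = new Thread(() -> results[0] = sumUpTo(100));
        Thread b = new Thread(() -> results[1] = sumUpTo(200));
        a.start();
        b.start();
        try {
            a.join(); // wait for both threads to finish
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return results;
    }
}
```

Note that the shared `results` array is only safe here because each thread writes to a different slot and we `join()` before reading.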


Atomic

An atomic operation is one that is guaranteed to complete fully, without interruption from another thread.
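In Java, for example, the `java.util.concurrent.atomic` classes provide such operations: a plain `count++` is really three steps (read, add, write) that another thread can interleave with, while `incrementAndGet` happens as a single atomic step. A small sketch (the `AtomicCounter` wrapper is an illustrative name, not a library class):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative wrapper: incrementAndGet performs the read, add and write
// as one atomic step, so no thread can observe a half-finished update.
class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    int increment() {
        return count.incrementAndGet();
    }

    int get() {
        return count.get();
    }
}
```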


Mutable

Mutable indicates that something can change. A class is mutable if its state, usually represented internally by field values, can change. This usually takes place through either internal code that modifies values, or external mutator functions (i.e. setters on a JavaBean).
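A minimal JavaBean-style sketch of a mutable class (the `MutablePoint` name is hypothetical):

```java
// Mutable: state can be changed after construction, by any thread
// that holds a reference, through the setter methods.
class MutablePoint {
    private int x;
    private int y;

    int getX() { return x; }
    int getY() { return y; }

    // External mutator functions, JavaBean style.
    void setX(int x) { this.x = x; }
    void setY(int y) { this.y = y; }
}
```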


Immutable

Immutable state cannot change. In terms of class instances, once the fields have been set, usually on construction, the state is fixed forever. The values are guaranteed to be the same for the life of the instance.
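A minimal sketch of an immutable class in Java (the `ImmutablePoint` name is hypothetical): all fields are final and set once in the constructor, and "changing" a value means creating a new instance.

```java
// Immutable: all fields are final and set on construction,
// so the state is fixed for the life of the instance.
final class ImmutablePoint {
    private final int x;
    private final int y;

    ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }
    int getY() { return y; }

    // "Mutation" returns a new instance; the original never changes.
    ImmutablePoint withX(int newX) {
        return new ImmutablePoint(newX, y);
    }
}
```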

Thread Safe

Code whose behaviour is not affected by the number of threads that might be executing it at the same time.

There are many reasons for running code in parallel and there are many benefits to using a programming model that takes concurrency into account. No, you don’t have to write all your code as though it will be running in its own thread, you just have to write the code so it will work if it does end up running in a separate thread.

Performance is the most often cited reason for concurrency. Running some code N times on one thread vs. running it N/8 times on each of 8 cores makes some obvious performance gains available to us. Moore's Law is reaching its limits, and the next step for performance increases is to do more with the processing power we already have. Concurrency can also make an application much more responsive: work is delegated to worker threads asynchronously while the user interface, running in the main thread, stays lively. Imagine being unable to scroll a web page because it was still loading images. Likewise, concurrency is important for scalability on the server side, where many requests must be dealt with at once.

By writing code with concurrency in mind I've found the code to be much more reliable and stable. This is mainly due to embracing immutability, but concurrent programs also force you to write code in smaller blocks that have more clarity. Testing is much easier, since we are often able to independently test these smaller, more concise nuggets of code. They can also improve the fault tolerance of the application, since we usually end up with fewer dependencies on other code and can make fewer assumptions about the environment. Failed tasks can sometimes be ignored, or re-scheduled to run again if needed (e.g. loading an image in a browser). These tolerances for failure (or even delays during execution) are hard to implement in non-concurrent, synchronous systems. It introduces a kind of pets vs. cattle mentality for programmers: the main thread is a pet and is precious, but worker threads are like cattle, and if one fails, we schedule a retry.

As a general principle, at least thinking about concurrency from the start brings a lot of benefits and is a lot easier than retrofitting it later when it is really needed.

Thread Safety

When considering whether something is thread safe and able to run concurrently alongside other threads, we need to determine whether the operations in one thread are isolated from the operations of another. If not, there is a danger that one thread can be disrupted by the actions of another. I use the term disrupted here because it is often the case that one thread should be able to alter the behaviour of another thread, as long as that altered behaviour is by design. The problem is when one thread is affected in unwanted or unexpected ways by another. The origin of such disruption is usually state that is shared between threads.

The following definitions can be used to consider how thread safe something is:

  1. The perfect thread-safe situation would be code where nothing is shared between different threads. Each thread could run independently against its own data and not worry about interference from any other thread. An example of this might be a program that is handed data when created and the isolated thread operates on that data only. In these cases, we don’t have to write anything differently to handle concurrency. The scope of all state is local to the thread and cannot be disrupted by another.
  2. Next best would be a situation where multiple threads share read-only state. While the state is shared, it is immutable, and since the values don't change, no thread can interfere with another by changing the state. It is a variation on the first scenario, except the data is shared, which is irrelevant since it is immutable. An example of this might be a ray tracer where threads are allocated different sections of the screen to render using a shared scene model. Again, in these cases, we don't have to implement anything differently: while the state is shared, it is immutable, which is as good as being locally scoped to the thread.
  3. Finally, the most difficult scenario is where multiple threads access state that is mutable and can change at any point in time. To make this scenario work, we must ensure that the execution of one thread is not impacted by state changes from another thread. We can do this by making any state changes themselves thread safe and when we recognise that the state used by a thread is mutable, implement our threads with the expectation that state will change.
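A minimal sketch of the third scenario in Java (the `SynchronizedCounter` name is hypothetical): the shared count is mutable, so every access is guarded by the object's intrinsic lock, making each state change atomic with respect to other threads.

```java
// Shared mutable state guarded by a lock: count++ is really a read,
// an add and a write, and the synchronized keyword stops another
// thread from interleaving between those steps.
class SynchronizedCounter {
    private long count = 0;

    synchronized void increment() {
        count++;
    }

    synchronized long get() {
        return count;
    }
}
```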

The first two scenarios are near identical, except that when nothing is shared, we don't care about immutability, but when any state is shared there is some effort involved in ensuring that the state is immutable and stays immutable. There is the added risk that when an immutable class is later modified, it can be made mutable without consideration for the original requirement of immutability. As we'll see later, there are ways to mark classes as thread safe, mutable or immutable.

In many cases, the best way to deal with the last scenario is to implement it such that it reduces the problem down to one of the first two scenarios. This typically involves considering how we implement both the state and the code executed on the thread. Making our state immutable would be ideal, but it is not always possible, at least directly. If we cannot have immutable state, then updates should be done atomically so as to reduce interference. At the same time, in the code executed in our threads, we should be aware that the state might be subject to change and handle it appropriately.
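One way to sketch this reduction (the names and the choice of `AtomicReference` here are my own, not prescribed above): threads share a single atomic reference to an immutable snapshot, so readers always see a consistent, read-only view, and writers publish a whole new snapshot in one atomic step.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Reduces shared mutable state to atomic swaps of immutable snapshots:
// readers get an unmodifiable List that can never change under them.
class SnapshotHolder {
    private final AtomicReference<List<String>> snapshot =
            new AtomicReference<>(List.of());

    List<String> current() {
        return snapshot.get(); // immutable snapshot, safe to iterate
    }

    void add(String item) {
        // Build a new immutable list and publish it atomically.
        snapshot.updateAndGet(old -> {
            List<String> next = new ArrayList<>(old);
            next.add(item);
            return List.copyOf(next);
        });
    }
}
```

The update function may be retried under contention, which is safe here because it only builds a new list and never touches the published snapshot in place.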

At the end of the day, mutability is inevitable somewhere in our code and we need to be able to handle mutability across different threads. Ideally, we do this by reducing the problem down to some common patterns that we have reliable solutions for.

The next post will cover the topic of mutability and immutability: what it is, what it's good for and how it's implemented. I also hope it will be a lot less wordy.