
In the previous post we saw how JIT inlining works. We also saw how the JVM performs OSR to replace the interpreted version of a method with the compiled version on the fly. In this post we’ll dig even deeper and see the assembly code that is generated when a method gets compiled.

## Prerequisites

The flag which enables us to see assembly code is -XX:+PrintAssembly. However, viewing assembly code does not work out of the box: you need a disassembler on your path. You’ll have to get hsdis (HotSpot Disassembler) and build it for your system. There’s a prebuilt version available for Mac and that’s the one I am going to use.

Once we have that, we’ll add it to LD_LIBRARY_PATH.
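For example (the install location is an assumption; point it at wherever your hsdis build lives):

```shell
# Make the hsdis plugin visible to the JVM (path is hypothetical).
export LD_LIBRARY_PATH="$HOME/hsdis:$LD_LIBRARY_PATH"
```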

Now we’re all set to see how JVM generates assembly code.

## Printing assembly code

We’ll reuse the same inlining code from last time:
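For reference, here’s a sketch of that program (the class name and loop count are my assumptions; the inline1 ⟶ inline2 ⟶ inline3 chain returning 4 matches the walkthrough below):

```java
public class Inlining {
    public static void main(String[] args) {
        long sum = 0;
        // Run long enough for the methods to become hot and get compiled.
        for (int i = 0; i < 1_000_000; i++) {
            sum += inline1();
        }
        System.out.println(sum);
    }

    static int inline1() { return inline2(); }
    static int inline2() { return inline3(); }
    static int inline3() { return 4; }
}
```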

-XX:+PrintAssembly is a diagnostic flag so we’ll need to unlock the JVM’s diagnostic options first. Here’s how:
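A sketch of the invocation (the class name Inlining is a placeholder for whatever your example class is called):

```shell
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Inlining > inlining-assembly.log
```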

This will generate a lot of assembly code. We will, however, look at the assembly code generated for inline1.

So this is the assembly code that we get when we run the program. It’s a lot to grok in one go so let’s break it down.

Lines #7 and #8 are self-explanatory; they show which method we’re looking at. Lines #9 and #10 (and #13 to #17) are for thread synchronization. The JVM can get rid of thread synchronization if it sees that there is no need for it (lock eliding) but since we are using static methods here, it needs to add code for synchronization. It doesn’t know that we have only one thread running.

Our actual program is on line #11 where we are moving the value 4 to the %eax register. This is the register which holds, by convention, the return value of our methods. This shows that the JVM has optimized our code. Our call chain was inline1 ⟶ inline2 ⟶ inline3 and it was inline3 which returned 4. However, the JVM is smart enough to see that these method calls are superfluous and decided to get rid of them. Very nifty!

Lines #21 to #23 have code to handle exceptions. We know there won’t be any exceptions but the JVM doesn’t, so it has to be prepared to deal with them.

And finally, there’s code to deoptimize. In addition to static optimizations, there are some optimizations that the JVM makes which are speculative. This means that the JVM generates assembly code expecting things to go a certain way after it has profiled the interpreted code. However, if the speculation is wrong, the JVM can go back to running the interpreted version.

## Which flags control compilation?

-XX:CompileThreshold is the flag which controls the number of call / branch invocations after which the JVM compiles bytecodes to assembly. You can use -XX:+PrintFlagsFinal to see the value. By default it is 10000.

Compiling a method to assembly depends on two factors: the number of times that method has been invoked (method entry counter) and the number of times a loop has been executed (back-edge counter). Once the sum of the two counters is above CompileThreshold, the method will be compiled to assembly.

Maintaining the two counters separately is very useful. If the back-edge counter alone exceeds the threshold, the JVM can compile just the loop (and not the entire method) to assembly. It will perform an OSR and start using the compiled version of the loop while the loop is executing instead of waiting for the next method invocation. When the method is invoked the next time around, it’ll use the compiled version of the code.

Since compiled code is better than interpreted code, and CompileThreshold controls when a method will be compiled to assembly, reducing the CompileThreshold would mean that a lot more of our code gets compiled to assembly.

There is one advantage to reducing the CompileThreshold - it will reduce the time taken for the branches / methods to be deemed hot i.e. reduce the JVM warmup time.

In older JDKs, there was another reason to reduce CompileThreshold. The method entry and back-edge counters would decay at every safepoint. This would mean that some methods would not compile to assembly since the counters kept decaying. These are the “lukewarm” methods that never became hot. With JDK 8+, the counters no longer decay at safepoints so there won’t be any lukewarm methods.

In addition, JDK 8+ comes with tiered compilation enabled and the CompileThreshold is ignored. The idea of there being a “compile threshold”, though, does not change. I’m deferring the topic of tiered compilation for the sake of simplicity.

## Where is the compiled code stored?

The compiled code is stored in the JVM’s code cache. As more methods become hot, the cache starts to fill up. Once it is full, the JVM can no longer compile anything to assembly and will resort to purely interpreting the bytecodes.

The size of code cache is platform dependent.

Also, the JVM ensures that access to the code cache is optimized. The hlt instructions in the assembly code exist for aligning addresses: it is more efficient for the CPU to fetch code that starts at an aligned address in memory, so the JVM pads the generated code with hlt instructions to keep it aligned.

## Which flags control code cache size?

There are two flags which are important in setting the code cache size - InitialCodeCacheSize and ReservedCodeCacheSize. The first flag indicates the code cache size the JVM will start with and the latter indicates the size to which the code cache can grow. With JDK 8+, ReservedCodeCacheSize is large enough so you don’t need to set it explicitly. On my machine it is 240 MB (5x what it is for Java 7, 48 MB).

## Conclusion

The JVM compiles hot code to assembly and stores it at aligned addresses in its code cache for faster access. Executing assembly code is much more efficient than interpreting the bytecodes. You don’t really need to look at the generated assembly every day but knowing what is generated as your code executes gives you an insight into what the JVM does to make your code run faster.

In the previous post we looked at how interpreted and compiled languages work. To recap, an interpreter works by executing a canned routine of native code for every bytecode it encounters. This is a very simple way to execute a program and also a very slow one: it ends up redoing the dispatch from bytecode to native code over and over again. This simplistic approach also means that the interpreter cannot do optimizations as it executes the bytecodes. Then there are compilers which produce assembly ahead-of-time. This overcomes having to translate the program again and again but once the assembly is generated it cannot be changed on the fly.

JVM comes with both an interpreter and a compiler. When the execution of the code begins, the bytecodes are interpreted. For the sake of this series, I’ll be looking at Oracle HotSpot JVM which looks for “hot spots” in the code as the bytecodes get interpreted. These are the parts of the code which are most frequently executed and the performance of the application depends on these. Once the code is identified as “hot”, JVM can go from interpreting the code to compiling it to assembly i.e. the code is compiled “just-in-time”. In addition, since the code is being profiled as it is run, the compiled code is optimized.

In this post we’ll look at one such optimization: inlining.

## Inlining

Inlining is an optimization where the call to a method is replaced by the body of the called method i.e. at the call site, the caller and the callee are melded together. When a method is called, the JVM has to push a stack frame so that it can resume from where it left off after the called method has finished executing. Inlining improves performance since JVM will not have to push a stack frame.

I’ll start with a simple example to demonstrate how inlining works.
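Something like the following (a reconstruction; the original listing was lost in formatting — the method names match the discussion below, the loop count is my assumption):

```java
public class Inlining {
    public static void main(String[] args) {
        long sum = 0;
        // Call inline1 enough times that the JIT deems it hot.
        for (int i = 0; i < 1_000_000; i++) {
            sum += inline1();
        }
        System.out.println(sum);
    }

    static int inline1() { return inline2(); }
    static int inline2() { return inline3(); }
    static int inline3() { return 4; }
}
```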

Next, let’s compile and run the code.
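Along these lines (again assuming the class is named Inlining; -XX:+PrintInlining is diagnostic, hence the unlock flag):

```shell
javac Inlining.java
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompilation -XX:+PrintInlining Inlining
```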

Output:

Line #1 shows that inline1 was compiled to assembly. Lines #2 and #6 show that inline2 and inline3 were also compiled to assembly. Lines #3 to #5 show inlining. We can see that inline3 was merged into inline2. Similarly, lines #8 and #9 show that inline2 was merged into inline1. So basically, all the methods were inlined into inline1. This means that once a certain threshold is crossed, we’ll no longer be making method calls at all. This gives a significant performance boost.

### Which flags control inlining?

When you run a Java program, you can view the flags with which it ran using -XX:+PrintFlagsFinal. Let’s do that and look at a few flags of interest.

You’ll see a bunch of flags and their default values. The ones we are interested in are CompileThreshold, MaxInlineLevel, MaxInlineSize, and FreqInlineSize.

CompileThreshold is the number of invocations before compiling a method to native.
MaxInlineLevel is a limit on how deep you’d go before you stop inlining. The default value is 9. This means that if we had a chain of method calls like inline1 ⟶ inline2 ⟶ … ⟶ inline20, we’d only inline up to inline10. Thereafter, we’d invoke inline11.
MaxInlineSize decides the maximum size of a method, in bytecodes, to be inlined. The default value is 35. This means that if the method to be inlined has more than 35 bytecodes, it will not be inlined.
FreqInlineSize, in contrast, decides the maximum size of a hot method, in bytecodes, to be inlined. This is a platform-dependent value and on my machine it is 325.

You can tweak these flags to change how inlining behaves for your program.

### What is On Stack Replacement (OSR)?

When we make a method call, the JVM pushes a stack frame. When a method is deemed hot, the JVM replaces the interpreted version with the compiled version by replacing the old stack frame with a new one. This is done while the method is running. We saw OSR being indicated in our example. The % indicates that an OSR was performed.

Let’s write some code to see OSR in action once again.
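The code was along these lines (a reconstruction; variable names are my assumptions — note that whether and when the loop breaks depends on the JVM and its optimizations, which is the whole point of the demo):

```java
import java.lang.ref.WeakReference;

public class OSRDemo {
    public static void main(String[] args) {
        Object unused = new Object();
        WeakReference<Object> ref = new WeakReference<>(unused);
        int counter = 0;
        // `unused` is strongly reachable, so ref.get() should
        // never return null and the loop should spin forever... right?
        while (ref.get() != null) {
            counter++;
        }
        System.out.println("Loop terminated after " + counter + " iterations");
    }
}
```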

So this is a loop that will never terminate, right? Let’s run the program and see.

What just happened? When the JVM decided to perform an OSR, it saw that there was no use for the unused object and eliminated it, causing the WeakReference to return null and thus breaking the loop. When an OSR is performed, the method that is invoked doesn’t restart execution from the start. Rather, it continues from the “back-edge”; in our case, that is the loop. Since the JVM saw that there was no use for the unused object after this back-edge, it was removed, the garbage collector could clear the WeakReference, and the loop could terminate.

Being able to resume execution from the back-edge is very efficient. It means that once a method has been compiled to native code it can be used right away rather than at the next invocation of the method.

## Conclusion

To recap, we saw how JVM inlines code. Fusing the caller and the callee provides for improved performance since the overhead of method dispatch is avoided. We saw the flags which control inlining and we saw how JVM performs OSR.

Inlining is a very useful optimization because it forms the basis for other optimizations like escape analysis and dead code elimination.

## Motivation

My day job requires me to write code in Clojure. This means the code is eventually compiled to bytecode and run on the JVM. Intrigued by how the JVM does what it does, I decided to dig a little deeper and look at how it optimizes the code on the fly. In this series of posts I will be looking at JVM JIT (Just-In-Time) compiler.

## Myriad ways to run a program

Before I go into how JVM JIT works, I want to take a quick look at how interpreted and compiled languages work. For this post, I’ll take a look at the working of Python (an interpreted language) and C (a compiled language).

### Python

Python, by default, ships with CPython - the original Python interpreter that runs C code for every bytecode. There are other implementations like IronPython or PyPy. IronPython turns Python into a fully compiled language running on top of Microsoft’s .NET Common Language Runtime (CLR) whereas PyPy turns Python into a JIT compiled language. For the sake of this post, however, I will look at CPython and how it works.

I’ll start with some code which will print the bytecodes for another Python file that is passed to it.
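A minimal version might look like this (the original script’s exact shape is an assumption; it reads a file name from the command line and disassembles it with the dis module):

```python
# show_bytecode.py
import dis
import sys


def dump_bytecode(path):
    """Compile a Python source file and print its bytecode."""
    with open(path) as f:
        source = f.read()
    dis.dis(compile(source, path, "exec"))


if __name__ == "__main__" and len(sys.argv) > 1:
    dump_bytecode(sys.argv[1])
```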

Next, here’s some code that’ll print numbers.

Now, let’s run the code and see the bytecodes we get.

Output:

The loop starts on line #4. For every element in the list, we’re pushing print and n onto the stack, calling the function, popping the stack, and repeating the loop. Each of the bytecodes - FOR_ITER, STORE_NAME, etc. - has associated C code that the interpreter executes.

This is a very simple way to run a program and also a very inefficient one. We’re repeating the stack operations and jumps over and over again. There’s no scope for optimizations like loop unrolling.

### C

In contrast to Python is C. All the C code is compiled to assembly ahead-of-time. Here’s a simple C program which will print “EVEN” if a number is even.

Next, let’s compile this code.
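The post’s exact command is missing; with clang (which matches the LBB labels discussed below) it would be something like:

```shell
clang -S numbers.c
```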

This will generate numbers.s. The assembly is fairly long so I’ll just cover the relevant parts.

Lines #2 - #3 show that if we’ve reached the limit of 10k, we’ll jump to LBB0_7 and the program ends.
If not, on line #5 we perform a signed division (idivl) and check whether the remainder is zero. If it is not zero, we jump to LBB0_4 and print L_.str.1, which is just whitespace.

We will always end up making this jump because we’ll never reach the condition where we have an even number. This is the problem with ahead-of-time compilation where you cannot speculate what the data is going to be and therefore you have to be ready to handle all the possibilities.

### JVM JIT

JVM JIT combines the best of both the worlds. When you execute your program the first time, the bytecodes are interpreted. As the code continues to execute, JVM collects statistics about it and the more frequently used code (“hot” code) is compiled to assembly. In addition, there are optimizations like loop unrolling. Loop unrolling looks like this:
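Conceptually, unrolling turns a loop with a compare-and-jump per iteration into straight-line code (this is a hand-written sketch of what the JIT might do for a known trip count, not actual JIT output):

```java
public class Unroll {
    // One compare-and-branch per element.
    static int sumLoop(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }

    // The same computation unrolled: no per-iteration jumps.
    static int sumUnrolled(int[] a) {
        return a[0] + a[1] + a[2] + a[3];
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(sumLoop(a) + " == " + sumUnrolled(a));
    }
}
```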

Unrolling a loop helps avoid jumps and thus makes execution faster.

Also, since JVM collects statistics about code, it can make optimizations on the fly. For example, in the case where an even number is never reached, JVM can generate assembly code that’ll only have the else part of the branch.

## Conclusion

The JVM does some fairly interesting optimizations under the hood. The aim of this series of posts is to cover as much of this as possible. We’ll start simple and build upon this as we go.

One of the nice things that you’ll come across in Clojure is the transducer. In this post I’ll go over what transducers are, how you can use them, how you can make one, and what transducible contexts are.

## What are transducers?

In simple terms, transducers are composable transformation pipelines. A transducer does not care about where the input comes from or where the output will go; it simply cares about the transformation of the data that flows through the pipeline. Let’s look at an example:
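The example (reconstructed; the original snippet was lost in formatting, but it matches the description that follows):

```clojure
(def xf
  (comp (map inc)
        (filter even?)))

(sequence xf [1 2 3 4 5])
;; => (2 4 6)
```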

Here xf (short for “xform”, a conventional name for a transducer) is our transducer which will increment every number and then keep only the even numbers. Calling sequence functions like map, filter, etc. with a single arity returns a transducer which you can then compose. The transducer doesn’t know where it will be used - will it be used with a collection or with a channel? So, a transducer captures the essence of your transformation. sequence is responsible for providing the input to the transducer. This is the context in which the transducer will run.

Here’s how the same thing can be done using threading macro:
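A sketch of the threaded equivalent:

```clojure
(->> [1 2 3 4 5]
     (map inc)
     (filter even?))
;; => (2 4 6)
```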

The difference here is that the 2-arity versions of map and filter will create intermediate collections while the 1-arity versions won’t. Transducers are much more efficient than threading together sequence functions.


## Inside a transducer

Let’s look at the 1-arity version of map and see what makes a transducer.

When you call the 1-arity version of map you get back a transducer which, as shown above, is a function. Functions like map, filter, etc. take a collection and return a collection. Transducers, on the other hand, take one reducing function and return another. The function returned is expected to have three arities:

• 0-arity (init): This kickstarts the transformation pipeline. The only thing you do here is call the reducing function rf.
• 2-arity (step): This is where you'll perform the transformation. You get the result so far and the next input. In the case of map, you call the reducing function rf by applying the function f to the input. How the value is going to be added to the result is the job of rf. If you don't want to add anything to the result, just return the result as-is. You may call rf once, multiple times, or not at all.
• 1-arity (end): This is called when the transducer is terminating. Here you must call the 1-arity version of rf exactly once. This results in the production of the final value.
So, the general form of a transducer is this:
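A sketch, with map’s step shown (f is the function being mapped):

```clojure
(fn [rf]
  (fn
    ([] (rf))                                 ; init
    ([result] (rf result))                    ; completion
    ([result input] (rf result (f input)))))  ; step
```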

## Using transducers

You can use a transducer in a context. There are four contexts which come out of the box — into, transduce, sequence, and eduction.

### into

The simplest way to use a transducer is to pass it to into. This will add your transformed elements to an already-existing collection after applying the transducer. In this example, we’re simply adding a range into a vector.
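Something like this (reconstructed):

```clojure
(into [] (comp (map inc) (filter even?)) (range 10))
;; => [2 4 6 8 10]
```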

Internally, into calls transduce.

### transduce

transduce is similar to the standard reduce function but it also takes an additional xform as an argument.
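For example (my reconstruction): sum the even successors of 0–9.

```clojure
(transduce (comp (map inc) (filter even?)) + 0 (range 10))
;; => 30
```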

### sequence

sequence lets you create a lazy sequence after applying a transducer. In contrast, into and transduce are eager.

### eduction

eduction lets you capture applying a transducer to a collection. The value returned is an iterable application of the transducer on the collection items which can then be passed to, say, reduce.
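A sketch of that:

```clojure
(reduce + 0 (eduction (map inc) (filter even?) (range 10)))
;; => 30
```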

## Inside a transducible context

As mentioned before, transducers run in transducible contexts. The ones that come as a part of clojure.core would suffice most real-world needs and you’ll rarely see yourself writing new ones. Let’s look at transduce.

transduce is just like reduce. The 3-arity version expects an initial value to be supplied by calling the 0-arity version of the supplied function. The 4-arity version is slightly more involved. IReduceInit is an interface implemented by collections to let them provide an initial value. It lets a collection reduce itself. If not, the call goes to coll-reduce which is a faster way to reduce a collection than using first/next recursion.

## Stateful transducers

It’s possible for transducers to maintain reduction state.

Here’s a transducer which will multiply all the incoming numbers. We maintain state by using a Volatile. Whenever we get a new input we multiply it with the product and update the state of Volatile using vswap!. Let’s see this in action:
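A reconstruction of that transducer (the name multiply-xf is my choice) and a run:

```clojure
(defn multiply-xf []
  (fn [rf]
    (let [product (volatile! 1)]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         ;; vswap! multiplies the running product by the input
         ;; and returns the new value.
         (let [new-product (vswap! product * input)]
           (rf result new-product)))))))

(into [] (multiply-xf) [1 2 3 4])
;; => [1 2 6 24]
```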

## Early Termination

The way the above transducer is written, it’ll process all the inputs even if one of the inputs is zero. We know that once we encounter a zero, we can safely end the reduction process and return a zero. reduced lets you return a reduced value and end the reduction. Let’s make a minor change to the above transducer and add in early termination.

In the 2-arity function, we check if the new-product is zero. If it is, we know we have a reduced value. We end the reduction by returning the result we have so far. Let’s see this in action:
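Putting the changed transducer and a run together (again, multiply-xf is my name for it):

```clojure
(defn multiply-xf []
  (fn [rf]
    (let [product (volatile! 1)]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (let [new-product (vswap! product * input)]
           (if (zero? new-product)
             (reduced result)   ; end the reduction early
             (rf result new-product))))))))

(into [] (multiply-xf) [1 2 0 3 4])
;; => [1 2]
```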

## Conclusion

Transducers can be a very useful tool in your Clojure toolkit that lets you process large collections, channels, etc. effectively by letting you build composable transformation pipelines that process one element at a time. They require a little getting used to but once you’re past the learning curve, performance galore!

December 2017 marks my completing one year as a software engineer. This post is a retrospective where I list the lessons I’ve learned, in no specific order of priority.

## Personal Life

### Learn to invest

We’re having too good a time today. We ain’t thinking about tomorrow.
— John Dillinger, Public Enemy

It’s easy to get carried away because of the fat pay cheque your fancy developer job brings you at the end of the month. Think of all the bling you can buy. This hedonistic attitude, however, does not help you hedge against the volatility of the job market. In his book Out of our Minds: Learning to be Creative, Ken Robinson states that secure life-long employment in a single job is a thing of the past. This applies even more if you work in a startup environment where things change in the blink of an eye.

Making proper investments and establishing a second flow of cash can help you get a grip on the situation when it gets rough. The simplest way to invest is to put your money in the stock market and let the dividends add to your monthly flow of cash. You do not have to be an active day trader managing your positions. You can very easily invest in mutual funds, index funds, etc. and they are not that difficult to begin with.

One of the best books I’ve been recommended by a friend of mine is Trading and Exchanges: Market Microstructure for Practitioners by Larry Harris. This will give you a very good overview of the market and all that you need to become a confident investor.

### Be ready to interview

Love your job but don’t love your company, because you may not know when your company stops loving you.
— A. P. J. Abdul Kalam, 11th President of India

The key takeaway of working in a startup environment is this: things change very rapidly and when push comes to shove, you will be thrown overboard. Having seen this happen to people close to me, I’ve learned that you need to be ready to interview with other startups and/or companies as soon as you are fired or have resigned. This includes staying in touch with the fundamentals you’ve learned in your CS classes, like data structures and algorithms, and knowing the technologies you’ve used in reasonable depth. Also, having a good network of developers goes a long way in easing your search for a new job. And don’t forget to sharpen the saw — do things that are not programming that make you a better programmer.

### Learn to negotiate

Dude, it’s five minutes. Let’s un-suck your negotiation.
— Patrick McKenzie

Learning how to negotiate is one of the most important skills that is often overlooked. It is overlooked because negotiating for salary is perceived as cheap, or simply as a difficult conversation to be avoided. Whatever the case, get over it and learn the skill. As Patrick McKenzie mentions in his brilliant blog post Salary Negotiation: Make More Money, Be More Valued, all it takes is five minutes to finalize your salary. These five minutes have a lasting impact for at least a year to come.

### Read a lot

Read, read, read.
— William Faulkner

I’ll admit that it is hard to take time out to read daily but keeping a dedicated slot of 30 minutes just for reading goes a long way. I make sure it’s a distraction-free slot with no dinging notifications and I try not to multitask. Another exercise that I do in conjunction with reading is trying to improve my retention of the material I’ve read by improving my recall memory. The best way to do this is to use the Feynman technique, where you elucidate what you’ve learned and pretend to teach it to a student.

I prefer keeping one programming and one non-programming book as part of my daily reading. In addition, there’s a list of blogs and papers that I read from time to time. I’ll probably post a list as a separate post.

## Engineering

### Understand the Peter principle

The key to management is to get rid of the managers.
— Ricardo Semler

Laurence Peter came up with the concept that a person in a role keeps getting promoted based on their performance in their current role and not on the abilities that the role demands. It is quite possible to have a manager who doesn’t know how to do his job well i.e. he’s risen to his level of incompetence. This principle is important to understand as an engineer and is something that should make one reflect on one’s current skillset - do you possess the abilities to be in the role you are currently in or do you need to skill up? Again, this goes back to my point on sharpening the saw and doing non-programming things that make you a better developer.

### Tech debt is evil

Simplicity is hard work. But, there’s a huge payoff. The person who has a genuinely simpler system - a system made out of genuinely simple parts, is going to be able to affect the greatest change with the least work. He’s going to kick your ass. He’s gonna spend more time simplifying things up front and in the long haul he’s gonna wipe the plate with you because he’ll have that ability to change things when you’re struggling to push elephants around.
— Rich Hickey

When Ward Cunningham came up with the tech debt metaphor, he was referring to writing code despite having a poor understanding of the requirements. As time passes, whatever little understanding there was of the requirements fades away and the code is taken for granted. The term has since come to mean poorly written code that nobody understands and that is taken for granted - something Ward Cunningham disagrees with.

The lethal combination is poorly written code for badly understood requirements, and you’ll come across this very often in startup environments. It comes with some pretty nasty ramifications like team in-fighting and politics. In the worst cases, it can bring the development of new features to a grinding halt.

Avoiding tech debt has to be a top priority for any startup that wants to grow. A lot of it revolves around establishing some processes to convey requirements among teams and ensuring that the resulting system design is simple. Like the quote by Rich Hickey shows, it is hard work but it will pay off in the longer run.

### Centralize your Logs

However, logging doesn’t seem to receive the same level of attention; consequently, developers find it hard to know the ‘what, when, and how’ of logging.
— Colin Eberhardt

Please stop asking developers to SSH into machines to read the logs. Centralize your logs by using ELK or, if you want to avoid the hassle of setting up ELK, use a hosted third-party service (or a log collector like Fluentd to ship logs to one). A good centralized logging strategy will not only save you the pain of SSH-ing into multiple servers and grep-ing, it will also let you search through your logs easily. In addition, aggregating logs from various servers helps you identify patterns that may emerge by checking what’s happening on multiple servers in a specific time range.

clojure.spec is a standard, expressive, powerful, and integrated system for specification and testing. It lets you define the shape of your data and place constraints on it. Once the shape and constraints are defined, clojure.spec can generate sample data which you can use to test your functions. In this post I’ll walk you through how you can use clojure.spec in conjunction with other libraries to write unit tests.

## Motivation

As developers, we are accustomed to writing example-based tests - we provide a known input, look at the resulting output, and assert that it matches our expectations. Although there is nothing wrong with this approach, there are a few drawbacks:

1. It is expensive as it takes longer to complete.
2. It is easier to miss out on the corner cases.
3. It is more prone to pesticide paradox.[1]

In contrast, clojure.spec allows you to do generative, property-based testing. Generative testing allows you to specify what kind of data you are looking for. This is done by using generators. A generator is a declarative description of possible inputs to a function.[2] Property-based testing allows you to specify how your program is supposed to behave, given an input. A property is a high-level specification of behavior that should hold for a range of inputs.[3]

## Setup

### Creating an App

We’ll begin by creating an app using lein and defining the dependencies. So go ahead and execute the following to create your project:

### Adding Dependencies

Next we’ll add a few dependencies. cd into clj-spec and open project.clj. Add the following to your :dependencies

clojure.spec comes as a part of Clojure 1.9 which, as of writing, isn’t out yet. If you’re on Clojure 1.8, as I am, you can use clojure-future-spec which will give you the same APIs. circleci/bond is a stubbing library which we’ll use to stub IO, network calls, database calls, etc. cloverage is the tool we’ll use to see the coverage of our tests.

## Using clojure.spec

### Simple Specs

Fire up a REPL by executing lein repl and require the required namespaces ;)
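With clojure-future-spec on Clojure 1.8 the namespaces are clojure.spec and clojure.spec.gen (on 1.9 they became clojure.spec.alpha and clojure.spec.gen.alpha); the aliases are my choice:

```clojure
(require '[clojure.spec :as s]
         '[clojure.spec.gen :as gen])
```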

spec will let us define the shape of our data, and constraints on it. gen will let us generate the sample data.

Let’s write a simple spec which we can use to generate integers.
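A sketch of the spec:

```clojure
(s/def ::n integer?)
```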

We’ve defined a spec ::n which will constrain the sample data to only be integers. Notice the use of the double colon to create a namespace-qualified keyword; this is a requirement of the spec library. Now let’s generate some sample data.

s/gen takes a spec as an input and returns a generator which will produce conforming data. gen/generate exercises this generator to return a single sample value. You can produce multiple values by using gen/sample:
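Exercising the ::n spec (outputs are random, so yours will differ):

```clojure
(gen/generate (s/gen ::n))
;; => a single random integer

(gen/sample (s/gen ::n))
;; => ten random integers
```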

We could have done the same thing more succinctly by using the in-built functions as follows:

### Spec-ing Maps

Let’s say we have a map which represents a person and looks like this:
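For instance (values are illustrative):

```clojure
{:name "John Doe"
 :age  32}
```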

Let’s spec this.
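A reconstruction of the spec that matches the description below:

```clojure
(s/def ::name string?)
(s/def ::age integer?)
(s/def ::person (s/keys :req-un [::name ::age]))
```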

We’ve defined ::name to be a string, and ::age to be an integer (positive or negative). You can make your specs as strict or as lenient as you choose. Finally, we define ::person to be a map which requires the keys ::name and ::age, albeit without namespace qualification. Let’s see this in action:
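Generating a sample person (the generated values are random):

```clojure
(gen/generate (s/gen ::person))
;; => a map like {:name "..." :age ...} — note the unqualified keys
```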

By now you must have a fair idea of how you can spec your data and have sample values generated that match those specs. Next we’ll look at how we can do property-based testing with specs.

## Using test.check

test.check allows us to do property-based testing. Property-based tests make statements about the output of your code based on the input, and these statements are verified for many different possible inputs.[4]

### A Simple Function

We’ll begin by testing the simple function even-or-odd. We know that for all even numbers we should get :even and for all odd numbers we should get :odd. Let’s express this as a property of the function. Begin by require-ing a couple more namespaces.
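The requires (aliases are my choice; test.check may need to be added as a dependency), plus a plausible reconstruction of even-or-odd:

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as tc-gen]
         '[clojure.test.check.properties :as prop])

;; The function under test (reconstructed).
(defn even-or-odd [coll]
  (map #(if (even? %) :even :odd) coll))
```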

Now for the actual property.
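A sketch of the property described below (the var name even-odd-prop and the aliases are my assumptions):

```clojure
(def even-odd-prop
  (prop/for-all [v (tc-gen/vector (tc-gen/elements [0 1]))]
    (let [result (even-or-odd v)]
      (and (= (count (filter zero? v))
              (count (filter #(= % :even) result)))
           (= (count (filter #(= % 1) v))
              (count (filter #(= % :odd) result)))))))
```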

We have a generator which will create a vector of 0s and 1s only. We pass that vector as an input to our function. Additionally, we know that the number of 0s should equal the number of :evens returned and that the number of 1s should equal the number of :odds returned.

Next, let’s test this property.
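With the property bound to a var (I’ve called it even-odd-prop), running it looks like:

```clojure
(tc/quick-check 100 even-odd-prop)
;; => {:result true, :num-tests 100, :seed ...}
```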

Awesome! We ran the test 100 times and it passed. The added benefit is that the input generated will be different every time you run the test.

### Using bond

bond is a library which will let you stub side-effecting functions like database calls. We’ll require the namespace and modify our code to save even numbers to a database.

First, the namespace.
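bond’s public API lives in bond.james:

```clojure
(require '[bond.james :as bond])
```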

Next, the code.
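A reconstruction of the modified code (the body of save is a stand-in; in the post it would hit a real database):

```clojure
(defn save [n]
  ;; stand-in for a real database write
  )

(defn even-or-odd [coll]
  (map (fn [n]
         (if (even? n)
           (do (save n) :even)
           :odd))
       coll))
```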

Now let’s update the property and stub save.
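A sketch of the updated property (names and aliases as before are my assumptions; the doall forces the lazy map to run inside the stub’s scope):

```clojure
(def even-odd-prop
  (prop/for-all [v (tc-gen/vector (tc-gen/elements [0 1]))]
    (bond/with-stub [save]
      (let [result (doall (even-or-odd v))]
        (= (count (filter #(= % :even) result))
           (count (bond/calls save)))))))
```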

Notice how we’re using bond/with-stub and telling it to stub the save function which calls the database. Later, we assert that the number of times the database was called is equal to the number of evens in the vector. Let’s verify the property.

Voilà! It works!

The last part of this post is about finding out test coverage using cloverage. For that, we’ll be moving our code to core.clj and writing tests under the test directory.

## Using cloverage

To see cloverage in action, we’ll need to add our functions to core.clj. Here’s what it’ll look like:

Update your clj-spec.core-test to the following:

Here we are using the defspec macro to run the same property-based test 100 times, only this time we’ll run the test via the command line using lein. Execute the following command to run the test and see the coverage.
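The invocation was along these lines (a reconstruction matching the -t and -n flags described below; cloverage can also be run via the lein-cloverage plugin):

```shell
lein run -m cloverage.coverage -t "clj-spec.core-test" -n "clj-spec.core"
```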

This will make use of cloverage to run the tests. -t denotes our test namespace and -n denotes the namespace for which the tests are written. You’ll get an output like this:

Perfect! Now we know how much coverage we have. The HTML file has a nice graphical representation of which lines we’ve covered with our tests.

## Conclusion

This brings us to the end of the post on using clojure.spec to write generative, property-based tests in both the REPL and source files. Generative testing automates the task of having to come up with examples for your tests. Where to go from here? Each of these libraries is pretty powerful in itself and will provide you with the necessary tools to write powerful, robust, and expressive tests that require minimal effort. So, head over to the official docs to learn more.

So far we’ve looked at monoids and functors. The next algebraic data structure we’ll cover is a monad. If you’ve wondered what a monad is but never really understood it, this is the post for you. I am sure that you’ve used it without realizing it. So let’s get to it.

## Definition

A monad has more structure than a functor. This means that you can call map on it and that it obeys all the functor laws. In addition, a monad has a flatMap function which you can use to chain monads together. In essence, monads represent units of computation that you can chain together and the result of this chaining is also a monad.

Let’s look at a few examples.

## Example

The above code[1] uses a for comprehension to multiply elements of the list together. Under the hood, this gets translated to:
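Neither snippet survives here; a sketch of such a for comprehension and its desugared form, with assumed list contents, might be:

```scala
// a for comprehension multiplying elements of two lists pairwise
val result = for {
  i <- List(1, 2)
  j <- List(3, 4)
} yield i * j

// what the compiler translates it to
val desugared = List(1, 2).flatMap { i =>
  List(3, 4).map { j => i * j }
}

// both produce List(3, 4, 6, 8)
```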

The compiler is making use of the List monad to chain operations together. Let’s break this down.

This part of the code will return a List since that is what calling map on a List does. Since we have two elements in the first list, the mapping will generate two lists of two elements each. This isn’t what we want. We want a single list that combines the results together.

The flattening of results is what flatMap does - it takes the two lists and squishes them into one.

## Monad Laws

For something to be a monad, it has to obey the monad laws. There are three of them:

1. Left identity
2. Right identity
3. Associativity

### Left Identity

This law means that if we take a value, put it into a monad, and then flatMap it with a function f, that’s the same as simply applying the function f to the original value. Let’s see this in code:
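The original example isn't shown; a minimal sketch, with an assumed function f, is:

```scala
def f(x: Int): List[Int] = List(x * 2)

// putting 1 into the List monad and flatMapping with f ...
val lhs = List(1).flatMap(f)
// ... is the same as applying f to 1 directly
val rhs = f(1)

// both are List(2)
assert(lhs == rhs)
```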

### Right Identity

This law means that if we take a monad, flatMap it, and within that flatMap we try to create a monad out of the value, then that’s the same as the original monad. Let’s see this in code:
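A minimal sketch of this law in code:

```scala
val m = List(1, 2, 3)

// flatMapping with the monad constructor gives back the original monad
assert(m.flatMap(x => List(x)) == m)
```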

Let’s walk through this. The function passed to flatMap gets the elements of the original list, List(1, 2, 3), one-by-one. The result is List(List(1), List(2), List(3)). This is then flattened to create List(1, 2, 3), which is the original list.

### Associativity

This law states that if we apply a chain of functions to our monad, that’s the same as the composition of all the functions. Let’s see this in code:
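The original example isn't shown; a sketch with two assumed functions f and g:

```scala
val m = List(1, 2, 3)
def f(x: Int): List[Int] = List(x, x)
def g(x: Int): List[Int] = List(x * 10)

// chaining flatMaps is the same as composing inside a single flatMap
assert(m.flatMap(f).flatMap(g) == m.flatMap(x => f(x).flatMap(g)))
```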

## Conclusion

This brings us to the end of the post on monads and their laws. List isn’t the only monad in your arsenal. Options and Futures are monads, too. I suggest going ahead and constructing examples for monadic laws for them.

The next algebraic structure we’ll look at is a functor. In the introduction, we saw that a category consists of objects and arrows. As an example, we morphed a set of strings to another set which contained the reverse of those strings. In other words, we morphed an object to another object. What if we could morph an entire category to another category while preserving the structure? Well, that’s what a functor does.

## Formal Definition

Let $C$ and $D$ be categories. A functor $F: C \rightarrow D$ is a map taking each $C$-object $A$ to a $D$-object $F(A)$ and each $C$-arrow $f: A \rightarrow B$ to a $D$-arrow $F(f): F(A) \rightarrow F(B)$, such that for all $C$-objects $A$ and composable $C$-arrows $f$ and $g$

1. $F(id_A) = id_{F(A)}$
2. $F(g \circ f) = F(g) \circ F(f)$

## Example

Say we have a set $S$. From this set we create another set $List(S)$ which contains finite lists of elements drawn from $S$. The functor we want maps from set to set. Since we know that a category contains objects and arrows, $List$ becomes the object part. The arrow part takes a function $f:S \rightarrow S^\prime$ to a function $List(f): List(S) \rightarrow List(S^\prime)$ that, given a list $L = [s_1, s_2, s_3, … s_n]$, maps $f$ over the elements of $L$.

How does this translate to code? Quite naturally, it turns out. Containers like lists, trees, etc. that you can call map on are functors.

Let’s write some code. We’ll begin by creating a set $S$.

Next, we’ll create $f: S \rightarrow S^\prime$.

Next, let’s create $L$.

Next, we’ll create the function maplist.

Finally, let’s see this in action:
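The snippets for these steps aren't shown; putting them together, they might look like this (the contents of $S$ and the choice of $f$ are assumptions):

```scala
// the set S
val S: Set[String] = Set("a", "ab", "abc")

// f morphs a string to its reverse
val f: String => String = _.reverse

// a list L with elements drawn from S
val L: List[String] = List("a", "ab", "abc")

// maplist lifts f to work on List(S)
def maplist(f: String => String)(l: List[String]): List[String] =
  l.map(f)

maplist(f)(L) // List("a", "ba", "cba")
```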

As we can see, maplist applied the function f on all elements of L. We did this by using the map method of a List instance.

## Functor Laws

All functors are expected to obey the two laws that we saw in the formal definition. Let’s see how they translate to code.

### First Law

The first law states that if we map the identity function over a functor, we’ll get back a functor which is the same as the original functor.
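A minimal sketch of this law, using an assumed list:

```scala
val L = List("a", "ab", "abc")

// mapping identity gives back a functor equal to the original
assert(L.map(identity) == L)
```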

As we can see, applying identity to the list gives back the same list.

### Second Law

The second law states that if we map a functor using a composition of two functions, $F(g \circ f)$, it’s the same as first mapping the functor using the first function and then mapping the resulting functor using the second function, $F(g) \circ F(f)$.

We’ll begin by creating two functions f and g.

Now let’s put the theory into practice.
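The original snippet isn't shown; a sketch with two assumed functions f and g:

```scala
val f: Int => Int = _ + 1
val g: Int => Int = _ * 2

val L = List(1, 2, 3)

// mapping the composition g . f ...
val composed = L.map(g compose f)
// ... is the same as mapping f and then mapping g
val oneAtATime = L.map(f).map(g)

// both are List(4, 6, 8)
assert(composed == oneAtATime)
```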

As we see, the two lists are the same.

## More Functor Examples

### Example 1

Let’s consider a category where objects are integers. Arrows between objects indicates a “divided by” relationship. For example,

This indicates that 10 can be divided by 5. To reiterate, objects are numbers and arrows represent a “divided by” relationship.

Now let’s create a functor from the category to itself. This functor will multiply each object by 13. So, $F(10) = 130$. Is this a valid functor? We have $a \rightarrow b$ but is it true that $F(a) \rightarrow F(b)$?

The answer is yes. Our category has arrows that indicate a “divided by” relationship. So, $\frac{a}{b}$ will be an integer. Similarly, $\frac{13a}{13b}$ will also be an integer and maintain a “divided by” relationship. This shows that arrows do not always have to be functions. They can also indicate a relationship between their domain and codomain.

## Conclusion

In this post we saw functors which map objects from one category to another. Containers like trees, lists, etc. are functors. All functors are required to obey the two functor laws.

The first algebraic structure we’ll look at is a monoid. We’ve covered monoid previously in Scalaz Under the Hoods. In this post we’ll look at it again from a more abstract standpoint.

## Formal Definition

A monoid $(M, \bullet, e)$ is an underlying set $M$ equipped with

1. a binary operation $\bullet$ from pairs of elements of $M$ into $M$ such that $(x \bullet y) \bullet z = x \bullet (y \bullet z)$ for all $x, y, z \in M$
2. an element $e$ such that $e \bullet x = x = x \bullet e$ for all $x \in M$

We’ve already translated this definition to code. Just to recap, here’s what we wrote previously:
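The earlier snippet isn't reproduced here; a sketch consistent with the description below:

```scala
// the monoid typeclass, sketched from the earlier post
trait Monoid[A] {
  def mappend(a: A, b: A): A
  def mempty: A
}
```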

mappend is the binary operation $\bullet$, and mempty is the element $e$.

More concretely, we wrote this:
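The concrete snippet isn't reproduced here; a sketch of an Int monoid under addition, matching the description that follows:

```scala
// recap of the typeclass
trait Monoid[A] {
  def mappend(a: A, b: A): A
  def mempty: A
}

// Int monoid: mappend is +, mempty is 0
object IntMonoid extends Monoid[Int] {
  def mappend(a: Int, b: Int): Int = a + b
  def mempty: Int = 0
}
```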

So, $\bullet$ translates to the addition operation $+$, and $e$ translates to $0$. That way, $0 + x = x = x + 0$ where $x$ is any integer. That was fairly easy to understand.

## Monoid Homomorphism

A monoid homomorphism from $(M, \bullet, e)$ to $(M^\prime, \bullet^\prime, e^\prime)$ is a function $f: M \rightarrow M^\prime$ such that

1. $f(e) = e^\prime$ and
2. $f(x \bullet y) = f(x) \bullet^\prime f(y)$.

The composition of two monoid homomorphisms is the same as their composition as functions on sets.

I know this is abstract so let’s have a look at a concrete example. Let’s write some code. We’ll be reusing the monoids that we previously wrote.

Next, we’ll write a homomorphism $f: M \rightarrow M^\prime$
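The original code isn't shown; a plausible sketch, consistent with the discussion below (string concatenation mapped to addition via length), is:

```scala
// recap: the two monoids, sketched without the typeclass machinery
object StringMonoid {
  def mappend(a: String, b: String): String = a + b
  val mempty: String = ""
}

object IntMonoid {
  def mappend(a: Int, b: Int): Int = a + b
  val mempty: Int = 0
}

// a plausible homomorphism f: StringMonoid -> IntMonoid: string length
val f: String => Int = _.length
```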

Let’s see this in action. We’ll begin by testing the first rule.

So we see that the first rule is satisfied. Applying $f$ on the zero element of StringMonoid gives us the zero element of IntMonoid. Onto the second rule.

And we see that the second rule is also satisfied. Therefore, $f$ is a homomorphism such that $f: StringMonoid \rightarrow IntMonoid$. To recap, a monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid[1]. The monoid operation is still $+$ and the empty string is mapped to $0$, which is the zero/identity element of IntMonoid.

## Category with One Object

Suppose there’s a category $A$ with just one object in it. The identity arrow $id_A$ would point to itself. And the composition of this arrow with itself is $id_A$, which satisfies the associativity law. A monoid $(M, \bullet, e)$ may be represented as a category with a single object. The elements of M are represented as arrows from this object to itself, the identity element $e$ is represented as the identity arrow, and the operation $\bullet$ is represented as composition of arrows.

Any category with a single object is a monoid.

We’ve covered a lot of topics in Scalaz but before moving forward, I’d like to cover functors, monoids, monads, etc. These form the basis of functional programming and are grounded in category theory. This post is intended to be an introduction to category theory.

## What is Category Theory?

Category theory is a mathematical theory involving the study of categories. A category consists of a group of objects and transformations between them. Think of a category as a simple collection.[1]

Formally, a category $C$ consists of the following:

1. a collection of objects
2. a collection of arrows (called morphisms)
3. operations assigning each arrow $f$ an object $dom \space f$, its domain, and an object $cod \space f$, its codomain. We write this as $f: A \rightarrow B$
4. a composition operator assigning each pair of arrows $f$ and $g$, with $cod \space f = dom \space g$ a composite arrow $g \circ f: dom \space f \rightarrow cod \space g$, satisfying the associative law:
for any arrows $f: A \rightarrow B$, $g: B \rightarrow C$, and $h: C \rightarrow D$ (with $A$, $B$, $C$, and $D$ not necessarily distinct),
$h \circ (g \circ f) = (h \circ g) \circ f$
5. for each object $A$, an identity arrow $id_A: A \rightarrow A$ satisfying the identity law:
for any arrow $f: A \rightarrow B$,
$id_B \circ f = f$ and $f \circ id_A = f$

The formal definition above is taken verbatim from Basic Category Theory for Computer Scientists.

Let’s relate the diagram above[2] to the formal definition that we have. This simple category $C$ has three objects $A$, $B$, and $C$. There are three identity arrows $id_A$, $id_B$, and $id_C$. These identity arrows satisfy the identity law. For example, $id_A \circ g = g$. Intuitively, if you were “standing” on $A$ and you first “walked along” the $id_A$ arrow and then “walked along” the $g$ arrow to reach $B$, it’s as good as just “walking along” $g$.

## A More Concrete Example

Let’s consider a category $S$ whose objects are sets. We’ll translate this into code and hold it to the laws stated above.

1. $S$ is a collection of sets i.e. each object is a set.
2. an arrow $f: A \rightarrow B$ is a morphism from set $A$ to set $B$
3. for each function $f$, we have $dom \space f = A$, and $cod \space f = B$
4. the composition of a function $f: A \rightarrow B$ with $g: B \rightarrow C$ is a function from $A$ to $C$ mapping each element $a \in A$ to $g(f(a)) \in C$
5. for each set $A$, the identity function $id_A$ is a function with domain and codomain as $A$.

### Code

Let’s begin by creating our first object of category $S$ - a set $A$.

Next, let’s define a function $f$ which morphs $A$ to $B$.

Next, let’s morph $A$ to $B$ by applying the function $f$

The domain of $f$ is the set $A$, whereas the codomain is the set of reversed strings, $B$.

Next, let’s define a function $g$

Now let’s compose $f$ and $g$

And finally, let’s create an identity function

Let’s see this in action
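The snippets for these steps aren't shown; gathered together, a sketch might look like this (the contents of $A$ and the choices of $f$ and $g$ are assumptions):

```scala
// an object of category S: a set of strings
val A: Set[String] = Set("a", "ab", "abc")

// arrow f morphs a string to its reverse
val f: String => String = _.reverse

// morph A to B by applying f
val B: Set[String] = A.map(f)

// another arrow, g
val g: String => String = _.toUpperCase

// the composite arrow g . f
val gf: String => String = g compose f

// the identity arrow
val id: String => String = identity

// identity law: mapping id leaves A unchanged
assert(A.map(id) == A)
// composing and then mapping equals mapping one arrow at a time
assert(A.map(gf) == A.map(f).map(g))
```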

This is how we translate a category to code. In the coming posts we’ll cover more category theory.

In this post we’ll look at Memo which is a Scalaz goodie to add memoization to your program. We’ll recap what memoization is, write a recursive function to calculate Fibonacci numbers, and then add memoization to it using Memo.

## What is Memoization?

In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.[1]

## Fibonacci (without Memo)

So here’s the non-memoized recursive fibo which calculates the nth Fibonacci number. The issue is that it recalculates the same Fibonacci numbers over and over, and therefore cannot be used for large values of n.
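The original snippet isn't shown; a sketch of such a function:

```scala
// naive recursive Fibonacci; recomputes the same subproblems
// exponentially many times
def fibo(n: Int): BigInt =
  if (n <= 1) BigInt(n)
  else fibo(n - 1) + fibo(n - 2)

fibo(10) // 55
```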

## Fibonacci (with Memo)

Here’s the memoized version using Scalaz Memo. We are using an immutable, hash map-backed memo. The immutableHashMapMemo method takes a partial function defining how we construct the memo. In our case, if the value of n is not 0 or 1, we try looking up the value in the memo again. We recurse until we reach 0 or 1. Once that happens and our recursion returns, the resultant value is cached in an immutable.HashMap.
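The memoized snippet isn't shown; a sketch consistent with the description might be:

```scala
import scalaz.Memo

// memoized Fibonacci backed by an immutable.HashMap;
// `lazy val` lets the function refer to itself recursively,
// so intermediate results are cached as the recursion unwinds
lazy val fibo: Int => BigInt = Memo.immutableHashMapMemo {
  case 0 => BigInt(0)
  case 1 => BigInt(1)
  case n => fibo(n - 1) + fibo(n - 2)
}
```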

## Conclusion

Memoization is a great way to optimize your programs. In this post we used an immutable hash map memo. There are other types of memos applying different strategies to cache their results. The Memo companion object is the place to look for an appropriate memo that suits your needs.

In this post we’ll look at Lens which is a pure functional way of getting and setting data. Before we get into lenses, we’ll look at why we need lenses by looking at a simple example.

## Motivating Example

Say we have a class Person which has the name of the person and their address where the address is represented by Address. What we’d like to do is to change the address of the person. Let’s go ahead and create a Person.

Changing the Address while maintaining immutability is fairly easy.
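The original snippets aren't shown; a sketch with assumed field names and values:

```scala
case class Address(city: String, zip: Int)
case class Person(name: String, address: Address)

val p = Person("John", Address("Springfield", 1))

// immutably "change" the address with copy
val moved = p.copy(address = Address("Shelbyville", 2))
```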

The problem arises when things begin to nest. Let’s create an Order class representing an order placed by a Person.

Now, the person would like to change the address to which the items are delivered.
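The original snippet isn't shown; a sketch of the nested update, with assumed field names, might look like this:

```scala
case class Address(city: String, zip: Int)
case class Person(name: String, address: Address)
case class Order(person: Person, items: List[String])

val order = Order(Person("John", Address("Springfield", 1)), List("book"))

// one level of nesting already makes the update verbose
val updated = order.copy(
  person = order.person.copy(
    address = Address("Shelbyville", 2)))
```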

So, the deeper we nest, the uglier it gets. Lenses provide a succinct, functional way to do this.

## Lens

Lenses are a way of focusing on a specific part of a deep data structure. Think of them as fancy getters and setters for deep data structures. I’ll begin by demonstrating how we can create and use a lens and then explain the lens laws.

### Creating a Lens

What we’ve done is create a lens that accepts a Person object and focuses on its Address field. lensu expects two functions - a setter and a getter. In the first function, the setter, we’re making a copy of the Person object passed to the lens and updating its address field with the new one. In the second function, the getter, we’re simply returning the address field. Let’s see this in action by getting and setting values.
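To recap in code, the lens just described might look roughly like this (field names assumed):

```scala
import scalaz.Lens

case class Address(city: String, zip: Int)
case class Person(name: String, address: Address)

// lensu takes the setter first, then the getter
val addressLens: Lens[Person, Address] =
  Lens.lensu[Person, Address](
    (p, a) => p.copy(address = a), // setter: copy with the new address
    _.address                      // getter: return the address field
  )
```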

### Getting a Field

Once you create a lens, you get a get method which returns the address field in the Person object.

### Setting a Field

Similarly, there’s a set method which lets you set fields to specific values.

### Modifying a Field

mod lets you modify the field. It expects a function that maps Address to Address. In the example here, we’re appending “NY” to the name of the city.
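The original snippets for get, set, and mod aren't shown; a combined sketch, with assumed field names and values, might be:

```scala
import scalaz.Lens

case class Address(city: String, zip: Int)
case class Person(name: String, address: Address)

val addressLens = Lens.lensu[Person, Address](
  (p, a) => p.copy(address = a),
  _.address
)

val p = Person("John", Address("New York", 1))

addressLens.get(p)                        // the Address field of p
addressLens.set(p, Address("Boston", 2))  // a Person with the new address
// modify the field in place: append "NY" to the city name
addressLens.mod(a => a.copy(city = a.city + ", NY"), p)
```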

## Lenses are Composable

The true power of lenses is in composing them. You can compose two lenses together to look deeper into a data structure. For example, we’ll create a lens which lets us access the address field of the person in an Order. We’ll do this by composing two lenses.

Ignore the cmd. prefix to Order. That is just an Ammonite REPL quirk to avoid confusion with the Order trait from Scalaz. Next, we’ll combine the two lenses we have.
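The combined lens isn't shown here; a sketch, with assumed field names, might look like this:

```scala
import scalaz.Lens

case class Address(city: String, zip: Int)
case class Person(name: String, address: Address)
case class Order(person: Person, items: List[String])

val personLens: Lens[Order, Person] =
  Lens.lensu((o, p) => o.copy(person = p), _.person)

val addressLens: Lens[Person, Address] =
  Lens.lensu((p, a) => p.copy(address = a), _.address)

// get the person from the order AND THEN the address from that person
val orderAddressLens: Lens[Order, Address] = personLens >=> addressLens
```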

>=> is the symbolic alias for andThen. The way you read what we’ve done is: get the person from the order AND THEN get the address from that person.

This allows you to truly keep your code DRY. Now no matter which data structure Person and Address are nested in, you can reuse that lens to get and set those fields. It’s just a matter of creating another lens or a few more lenses to access the Person from a deep data structure.

Similarly there’s also compose which has a symbolic alias <=< and works in the other direction. I personally find it easier to use andThen / >=>.

## Lens Laws

Get-Put: If you get a value from a data structure and put it back in, the data structure stays unchanged.
Put-Get: If you put a value into a data structure and get it back out, you get the most updated value back.
Put-Put: If you put a value into a data structure and then you put another value in the data structure, it’s as if you only put the second value in.

Lenses that obey all three laws are called “very well-behaved lenses”. You should always ensure that your lenses obey these rules.

Here’s how Scalaz represents these lens laws:

identity is get-put law, retention is put-get law, and doubleSet is put-put law.

## Lenses and State Monads

Formally, a state monad looks like the following:
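The formal shape isn't shown here; conceptually it can be sketched as:

```scala
// conceptually: a state monad wraps a function from a state S
// to a pair of (new state S, result A)
type State[S, A] = S => (S, A)
```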

Given a state S, it produces a new state S together with a result A. This is a bit abstract so let’s look at a scenario. Say we have a list of people whose addresses we’d like to update to Fancytown with zip code 3. Let’s do that using lenses.

### Creating a State

Here we are creating a state using a for comprehension. The %= operator accepts a function which maps an Address to an Address. What we get back is a state monad. Now that we have a state monad, let’s use it to update the address.

### Updating the State

Next, let’s make person p1 move to Fancytown.

Here we are updating person p1's address. What we get back is a new state S (p1, but with the Fancytown address) and the result A, the new Address. state(p1) is the same as state.apply(p1). In short, we’re applying that state to a Person object.

## Conclusion

This brings us to the end of the post on lenses. Lenses are a powerful way to get, set, and modify fields in your data structures. The best part about them is that they are reusable and can be composed to form lenses that focus deeper into the data structure.