# Introduction to Linear Regression

The first topic we’ll look at is linear regression. As mentioned previously, regression is about understanding the dependence of one variable on another. We’ll start by building an intuitive understanding of what it is before we dive into the mathematical details. We’ll cover the theory, learn to solve some problems by hand, look at the assumptions supporting it, and then see what happens when those assumptions don’t hold. Once this is done, we’ll look at how we can do the same using gradient descent.

## What is Regression Analysis?

Regression analysis is the study of the dependence of one variable (the dependent variable, or regressand) on one or more other variables (the explanatory variables, or regressors). The aim is to see how the dependent variable changes, on average, when there is a change in the explanatory variables. This is better explained with an example.

The plot shown here is of daily income and daily expenditure. The dependent variable here is the expenditure which depends on the explanatory variable income. As we can see from the plot, for every income level we have a range of expenditures. However, as the regression line shows, on average as income goes up, the expenditure goes up, too.

Why do we say that the expenditure goes up on average? Let’s look at the raw data when daily income is 80 and when it is 100.

The highest expenditure when income is 80 (75) is greater than the lowest expenditure when income is 100 (65). If we look at the averages, however, the expenditure has gone up. There are a number of factors which may have caused a person with income 80 to spend 75 but, as a whole, people with that daily income tend to spend less than those who earn 100 a day. The regression line passes through these averages.
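To make this concrete, here is a small Python sketch; the expenditure samples are made up for illustration and are not the data behind the plot:

```python
# Hypothetical expenditure samples at two income levels. Individual values
# overlap across the levels, but the conditional averages still rise.
expenditures = {
    80: [55, 60, 65, 70, 75],
    100: [65, 70, 74, 80, 85, 88],
}

for income, values in sorted(expenditures.items()):
    average = sum(values) / len(values)
    print(income, average)  # 80 -> 65.0, 100 -> 77.0
```

Note that the highest value at income 80 (75) exceeds the lowest value at income 100 (65), yet the average still rises with income.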

That’s the basic idea behind regression analysis.

## The Math behind Regression Analysis

The regression line passes through the average expenditure for every given income level. In other words, it passes through the conditional expected value $E(Y \mid X_i)$. This is read as “the expected value of expenditure ($Y$) given an income level ($X_i$)”. Therefore, $E(Y \mid X_i)$ is a function of $X_i$, i.e. $E(Y \mid X_i) = f(X_i)$. The question now is: what is the function $f$?

We can start by assuming $f$ to be a straight line function, which means that we’re assuming expenditure to be linearly related to income. So, $E(Y \mid X_i) = \beta_1 + \beta_2 X_i$. This is the slope-intercept form of a line, where $\beta_1$ is the intercept and $\beta_2$ is the slope. Both $\beta_1$ and $\beta_2$ are called the regression coefficients (or parameters). The function $E(Y \mid X_i) = \beta_1 + \beta_2 X_i$ is called the population regression function.

Now given that we know how to calculate the average expenditure for a given income level, how do we calculate the expenditure of each individual at that income level? We can say that the individual’s expenditure is “off the average expenditure by a certain margin”. This can be written as $Y_i = E(Y \mid X_i) + u_i$. The term $u_i$ denotes how far off an individual’s expenditure is from the average and is called the stochastic error term.

No matter how good our regression model is, there is always going to be some inherent variability. There are factors other than income which define a person’s expenditure. Factors like gender, age, etc. affect the expenditure. All of these factors are subsumed into the stochastic error term.

When it comes to applying regression analysis, we won’t have access to the population data. Rather, we have access to sample data. Our task is to come up with sample regression coefficients such that they’ll be as close as possible to the population regression coefficients. The sample regression function is defined as:

$$Y_i = \hat{\beta}_1 + \hat{\beta}_2 X_i + \hat{u}_i$$

Here $\hat{Y}_i$, $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{u}_i$ are estimators of $E(Y \mid X_i)$, $\beta_1$, $\beta_2$, and $u_i$ respectively. An estimator is simply a formula which tells us how to estimate the population parameter from the information provided by the sample data available to us.
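As a sketch of what the estimators do, here is a minimal ordinary least squares fit in Python; the income/expenditure numbers are made up and chosen to lie exactly on the line expenditure = 17 + 0.6 × income:

```python
# Ordinary least squares for a single regressor: estimate the intercept b1
# and slope b2 from sample data.
def ols(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope: sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
    b2 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    b1 = mean_y - b2 * mean_x  # intercept
    return b1, b2

income = [80, 100, 120, 140, 160]
expenditure = [65, 77, 89, 101, 113]
b1, b2 = ols(income, expenditure)
print(b1, b2)  # 17.0 0.6
```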

## What it means to be ‘Linear’

Linearity can be defined as being either linear in parameters or linear in explanatory variables. To be linear in parameters means that all the parameters in the regression function have power 1. By that definition, $Y_i = \beta_1 + \beta_2 X_i^2$ is linear. In contrast, $Y_i = \beta_1 + \beta_2^2 X_i$ is linear in explanatory variables since $X_i$ has power 1 but $\beta_2$ does not. We’ll be looking at models that are linear in parameters. Henceforth, “linear” will mean “linear in parameters”.

That’s it. This is what sets the foundation for further studying linear regression.

# The Machine Learning Notebook: Introduction

A while ago I’d started writing a series on machine learning titled ML for Newbies which I never got around to finishing. This series of posts is a redoing of the older series with greater mathematical and theoretical rigor. Some of the content will be ported over from that series but most of it will be new content.

The aim of the series is to summarize everything that I know about machine learning in an approachable, intuitive manner, and also to serve as notes for myself.

## What is Machine Learning?

ML is all about automated learning. It is about creating algorithms that can “learn” from the input given to them. These algorithms can then do a number of tasks such as filtering out spam emails, predicting the weather, etc. More formally, it’s about converting experience into expertise. The input to the algorithm becomes its “experience” and the task that it performs becomes its “expertise”.

Although ML is a sub-field of AI (Artificial Intelligence), they’re different. AI focuses on replicating human intelligence whereas ML focuses on complementing human intelligence by doing tasks that fall beyond human capabilities like looking for patterns in massive data sets.

## How does a Machine Learn?

Like I mentioned earlier, a machine learns from the input data given to it. The way the machine learns can also be classified as supervised, unsupervised, active, or passive.

### Supervised Learning

Let’s say your task is to build a spam filter. You have a large data set of emails, each of which is marked as either “spam” or “not spam”. You’ll feed this data set to your algorithm and it’ll learn from it. Later, you’ll input an email and it’ll tell you if it is spam or not. This type of learning is called supervised learning. It is called “supervised” because the algorithm was first taught what is spam and what isn’t by a data set generated manually. You had to supervise its learning process.

### Unsupervised Learning

Let’s stay with the spam filter example. Let’s now suppose that the emails in the data set are not marked as “spam” or “not spam”. It’s the task of the algorithm to figure it out somehow. Such learning is called unsupervised learning.

### Passive Learning

In passive learning, the algorithm only observes the data provided to it and learns from it, without influencing it in any way. Suppose now that the spam filter starts out blank with no notion of what is and what isn’t spam. It’s up to the user to mark the incoming emails so that the algorithm can learn from them. This is passive learning.

### Active Learning

In contrast, let’s say that the spam filter now looked at the email first, and if it found something suspicious, asked the user to mark it as spam or not. This is active learning.

## What can ML do?

There are a number of ML algorithms, each focusing on a different task. These algorithms can do a number of things like:

### Clustering

In simple terms, clustering is grouping. It’s an unsupervised form of learning in which the algorithm looks at the data and groups similar pieces of data together based on some similarity measure. In the spam filter example above, when the algorithm figured out on its own which of the emails are spam, it clustered them together.

### Classification

Classification is about finding out which group a piece of data belongs to. It’s a form of supervised learning because the algorithm needs to know beforehand what each of the groups looks like before it can be asked to classify a piece of data. When the spam filter was given a data set with marked emails, all it had to do was to answer the question “is this email spam?”. Basically, it was asked to classify that email into a group.

### Regression

Regression is about understanding relationships between variables. If one variable changes, how would it affect the other? For example, if the amount of power needed for a phone to stay on increased, how would it affect battery life?

### Association Rules

Finding association rules in data sets is also about finding relationships between variables. Suppose you have a large data set from an online store; association rule mining is about finding out which items you are likely to buy based on what you’ve previously bought. For example, if you buy milk and sugar, you’re likely to buy eggs. Your rule then becomes {milk, sugar} => eggs.
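The two standard measures for such a rule are its support and confidence. Here is a toy Python sketch on hypothetical transactions:

```python
# A toy support/confidence computation for the rule {milk, sugar} => eggs.
# The transactions below are hypothetical.
transactions = [
    {"milk", "sugar", "eggs"},
    {"milk", "sugar", "eggs", "bread"},
    {"milk", "bread"},
    {"sugar", "eggs"},
    {"milk", "sugar"},
]

antecedent = {"milk", "sugar"}
consequent = {"eggs"}

with_antecedent = [t for t in transactions if antecedent <= t]
with_both = [t for t in with_antecedent if consequent <= t]

support = len(with_both) / len(transactions)        # rule holds in 40% of baskets
confidence = len(with_both) / len(with_antecedent)  # holds in 2 of 3 candidate baskets
print(support, confidence)
```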

For now, don’t worry about the theoretical rigor of these. We’ll get to that in subsequent posts. That’s all for now. I hope this post was motivation enough for you to start learning ML. :)

# Implementing TSum: An Algorithm for Table Summarization

So, I am taking a break from my JVM JIT series to write this post about a table summarization algorithm that I had implemented way back in my college days. Simply put, the algorithm finds the most descriptive patterns in a table that succinctly convey the meaning of the rows contained in it, i.e. summarize it. This post will focus on what the algorithm is and how it’s implemented in JavaScript. Take this post with a grain of salt because the implementation was part of a college assignment and hasn’t been thoroughly checked for correctness.

### What is TSum?

> Given a table where rows correspond to records and columns correspond to attributes, we want to find a small number of patterns that succinctly summarize the dataset. … TSum, a method that provides a sequence of patterns ordered by their “representativeness.” It can decide both which these patterns are, as well as how many are necessary to properly summarize the data.

An algorithm like this is useful in situations where the volume of data is so large that it’d be impossible for a human to deduce anything from it simply by looking at the data. For example, given patient records, a description like “most patients are middle-aged men with cholesterol, followed by child patients with chickenpox” provides a very human-readable summary of the data.

The algorithm views finding the best pattern as a compression problem — the best pattern will give the best compression.

### Definitions

#### Pattern and Pattern Size

A pattern is a tuple $P = (p_1, p_2, \dots, p_D)$ where each $p_i$ can be either a specific value or a “don’t care” value represented by $\ast$. The number of matching attributes in a pattern is called the size of the pattern and is denoted by $size(P)$. An attribute is considered matching if and only if its value doesn’t equal the “don’t care” value.

#### Pattern List and Pattern Set

A pattern list is an ordered sequence of patterns while a pattern set is a set of patterns with no order.

#### Compression Saving

Compression saving of a pattern $P$ on a table $T$, denoted as $Saving(P, T)$, is the amount of compression it can achieve. Specifically, it is the difference in bits between the uncompressed and compressed representations of the data records covered by the pattern $P$. Let $N$ be the number of records covered by the pattern $P$ and $D$ be the number of attributes in $T$. Then,

$$Saving(P, T) = Benefit(P, T) - Overhead(P, T)$$

where

$$Benefit(P, T) = (N - 1) \cdot W, \qquad W = \sum_{i \in P} W_i$$

and $Overhead(P, T)$ is the number of bits needed to describe the pattern $P$ itself. Here,

- $W_i$ is the average number of bits in the $i$th attribute.
- $D$ is the number of attributes.

#### Residue Data Table

Given a table $T$ and a pattern collection $P$, the residue data table contains the records of $T$ that are not covered by any of the patterns in $P$.

#### Pattern Marshalling

Given a set of patterns $S$, the pattern marshalling algorithm iteratively picks the pattern from $S$ which has the highest compression saving. After every iteration, the records in the table which have been covered by the patterns chosen so far are removed from consideration.

### Generating Patterns - Local Expansion

Local expansion is a pattern generation strategy that tries to “grow” the patterns to increase the compression saving on the data records. In this approach, the algorithm will start with single attributes first and find a single-condition pattern that has the best compression saving, and then expand the pattern by adding other conditions until the compression cost cannot be improved. To find the next pattern, the same procedure is repeated, but only on the residue data table - the part of the table not covered by any of the patterns found so far.

### Code

We’ll start with simpler code first. Let’s start by writing a function to calculate the benefit.
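Here is a sketch of what such a function might look like. I’m using plain `Array.prototype.filter` in place of lodash, representing a pattern as an object that lists only its matching attributes, and assuming a `bitsFor` helper that returns the average bit count of an attribute:

```javascript
// matches: a row is covered by a pattern if it agrees on every attribute the
// pattern specifies ("don't care" attributes are simply absent).
function matches(pattern, row) {
  return Object.keys(pattern).every((attr) => row[attr] === pattern[attr]);
}

// bitsFor: a rough stand-in that charges 8 bits per character of the longest
// value seen for the attribute.
function bitsFor(table, attr) {
  const longest = Math.max(...table.map((row) => String(row[attr]).length));
  return longest * 8;
}

// benefit(P, T) = (N - 1) * W, where N is the number of covered records and
// W is the total bit count of the pattern's attributes.
function benefit(pattern, table) {
  const N = table.filter((row) => matches(pattern, row)).length;
  const W = Object.keys(pattern).reduce(
    (sum, attr) => sum + bitsFor(table, attr),
    0
  );
  return (N - 1) * W;
}
```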

The translation of the formula to code is straightforward. We start by finding N, which is the number of records in the table that match the pattern; using filter avoids the boilerplate of having to find the matching records by hand. W is the accumulator in which we sum the number of bits each attribute in the pattern takes. Then for each attribute in the pattern, we look up its bit count with bitsFor and add it to W. Finally, we return the value according to the formula.

Simply put, benefit is $(N - 1)$ times the total number of bits the attributes in the pattern would take.

Next, let’s write code to find $\log^{\ast}$, the universal code length for integers.
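A sketch of that log.js module; I’m assuming $\log^{\ast}(n)$ is computed by repeatedly taking $\log_2$ and summing the positive terms:

```javascript
// log2: JavaScript's Math.log is the natural log, so convert the base.
function log2(n) {
  return Math.log(n) / Math.log(2);
}

// logStar: the universal code length for integers,
// log*(n) = log2(n) + log2(log2(n)) + ... over the positive terms.
function logStar(n) {
  let sum = 0;
  let value = log2(n);
  while (value > 0) {
    sum += value;
    value = log2(value);
  }
  return sum;
}

module.exports = { log2, logStar };
```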

This is a fairly straightforward translation of the formula to code and needs no explanation.

We now have all the code we need to calculate overhead so let’s do that.

Notice that I start by require-ing the log.js module we just saw. I am using filter to find the number of rows in the table that match the pattern. On the next line I find the number of attributes in a given record of the table. Since the algorithm assumes no null / empty values for any attribute, we can safely pick up the 0th record and see how many attributes it contains.

Now that we have benefit and overhead in place, let’s calculate saving

Now let’s write code for pattern marshalling.
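A sketch of the marshalling loop; passing `saving` and `matches` in as functions is my simplification rather than the original code’s structure:

```javascript
// Greedy pattern marshalling: repeatedly pick the pattern with the highest
// compression saving on the residue table, then drop the rows it covers.
function marshal(patterns, table, saving, matches) {
  const chosen = [];
  let candidates = patterns.slice(); // work on a copy
  let residue = table.slice();
  while (candidates.length > 0) {
    // pick the candidate with the highest saving on the current residue
    let best = candidates[0];
    for (const p of candidates) {
      if (saving(p, residue) > saving(best, residue)) best = p;
    }
    if (saving(best, residue) < 0) break; // only non-negative savings qualify
    chosen.push(best);
    candidates = candidates.filter((p) => p !== best);
    // continue the next iteration on the residue table
    residue = residue.filter((row) => !matches(best, row));
  }
  return chosen;
}
```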

Given a set of patterns $S$ and a table $T$, we start by making a copy of the patterns passed to us. The aim is to select a pattern which gives us non-negative compression saving. If we find such a pattern, we add it to our patterns list and remove the chosen pattern from the current list of candidates. We continue the next iteration on the residue table. We repeat this until we have no more patterns to consider.

The next piece of the puzzle is to write code for local expansion.

As mentioned in the definitions, local expansion grows a pattern, starting with an empty pattern. The expansion is done in the expand function.

The final piece of code we need to look at is to calculate the coverage i.e. how much of the data is described by a given pattern.
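A sketch of that coverage computation, with patterns again represented as objects listing only their matching attributes:

```javascript
// coverage: the fraction of the table's rows that a pattern describes.
function coverage(pattern, table) {
  const matched = table.filter((row) =>
    Object.keys(pattern).every((attr) => row[attr] === pattern[attr])
  ).length;
  return matched / table.length;
}
```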

Now that we have all the machinery in place, let’s write a simple test. We’ll take the patients example given in the paper and turn it into JSON objects. Then we’ll write a simple script to run our code using this data and check whether the results make sense.

Here’s the data:
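What follows is a hypothetical reconstruction of that data; the attribute names and values are my own, chosen so that 5 of the 10 rows are adult males and 3 are children with low blood pressure:

```javascript
// Hypothetical patient records mirroring the paper's example.
const patients = [
  { gender: "M", age: "adult", bloodPressure: "normal" },
  { gender: "M", age: "adult", bloodPressure: "low" },
  { gender: "M", age: "adult", bloodPressure: "normal" },
  { gender: "M", age: "adult", bloodPressure: "high" },
  { gender: "M", age: "adult", bloodPressure: "low" },
  { gender: "F", age: "child", bloodPressure: "low" },
  { gender: "M", age: "child", bloodPressure: "low" },
  { gender: "F", age: "child", bloodPressure: "low" },
  { gender: "F", age: "teen", bloodPressure: "normal" },
  { gender: "F", age: "adult", bloodPressure: "high" },
];

module.exports = patients;
```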

Here’s the test:

Now let’s run this:

The output says that the most descriptive pattern is “adult male”, which makes up 50% of the rows, followed by “children with low blood pressure”, which make up 30% of the rows. If we look at the sample data, out of the 10 rows, 5 are “adult male” and 3 are “children with low blood pressure”. So the output of the algorithm checks out.

Finito.

# JVM JIT - Loop Unrolling

In the previous post we looked at how JVM generates assembly code and how we can take a look at it. In this post we will look at yet another optimization technique called loop unrolling.

### What is loop unrolling?

> the Server VM features loop unrolling, a standard compiler optimization that enables faster loop execution. Loop unrolling increases the loop body size while simultaneously decreasing the number of iterations.

Okay so it is an optimization that makes our loops faster by increasing the body size, and reducing the iterations. This is much better explained by example.

Consider a loop that calls a method S once per iteration. Unrolling it by a factor of 4 replicates the body so that S is called four times per iteration, while the loop’s stride becomes 4.
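Sketched in Java (S is a placeholder for whatever work the loop body does; the fix-up loop for leftover iterations is something a real compiler would also generate):

```java
public class Unroll {
    static int calls = 0;

    // S stands in for the work done in the loop body.
    static void S(int i) {
        calls++;
    }

    public static void main(String[] args) {
        int n = 10;

        // Before unrolling: S is called once per iteration, n jumps in total.
        for (int i = 0; i < n; i++) {
            S(i);
        }

        // After unrolling by 4: the body is replicated, the stride becomes 4,
        // and the number of jumps drops to roughly n / 4.
        int i = 0;
        for (; i + 3 < n; i += 4) {
            S(i);
            S(i + 1);
            S(i + 2);
            S(i + 3);
        }
        for (; i < n; i++) { // fix-up loop for the n % 4 leftover iterations
            S(i);
        }

        System.out.println(calls); // both versions did the same work: 20 calls
    }
}
```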

So the size of the loop body has increased because instead of calling the method S just once per iteration, we call it four times. The number of iterations has reduced because we now have a stride of 4 (i += 4). This is a space-time tradeoff: you gain speed by increasing the size of the program. If this loop were part of a hot method that got compiled to assembly, more code cache would be consumed because of unrolling.

Loop unrolling, however, wins on a different front: speed. By increasing the stride of the loop, we’re reducing the number of jumps that the CPU has to make. Jumps are costly, and a reduction in jumps is a big performance boost.

### Loop unrolling in action
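The benchmark under discussion might look roughly like this sketch (the array size and the constant 0x42 match the assembly discussed below; the class and method names are my own):

```java
import org.openjdk.jmh.annotations.*;

@Fork(1)
@Warmup(iterations = 10)
@Measurement(iterations = 5)
@State(Scope.Thread)
public class LoopBenchmark {
    int[] nums = new int[10];

    @Benchmark
    public int[] fill() {
        // Write a constant to every element; a prime candidate for unrolling
        // and auto-vectorization.
        for (int i = 0; i < nums.length; i++) {
            nums[i] = 0x42;
        }
        return nums;
    }
}
```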

The code above is a JMH benchmark which will fork 1 JVM (@Fork), warm it for 10 iterations (@Warmup), then run our benchmark for 5 iterations (@Measurement). The generated assembly looks like this:

So the first block of code is moving 0x42 to some memory location (line #3), incrementing the counter %ebp (line #4), and jumping back to the start of the loop for the next iteration (line #6). Then there is the second block which increments the counter by 4 (line #11) and uses vmovdqu (line #10).

In the second block, vmovdqu moves %xmm0 (a 128-bit register) to some memory location (line #10), i.e. we’re writing four 32-bit numbers in one go. This is equivalent to writing nums[0 .. 3] in one iteration.

This means that our entire loop, which writes 10 array elements, will not run for 10 iterations at all. It will run for 4 iterations in total: 2 to write 8 elements in groups of 4 each (block 2), and 2 more to write the remaining 2 elements (block 1).

What we saw above was loop unrolling to enable vectorization. Since this happened without any effort on our part, it’s called auto-vectorization.

### What is vectorization?

There are certain CPU instructions which are capable of operating on multiple data elements simultaneously. Such instructions are called SIMD instructions — Single Instruction Multiple Data or vectorized instructions. In the example above, all 4 elements were packed into a single register XMM0 and then moved to their memory location with vmovdqu. This results in faster processing by saving jumps and clock cycles.

Auto-vectorization is when the compiler converts the scalar instructions (which operate on a single data element at a time) to vector instructions (which operate on multiple data elements at a time) without any effort on the part of the programmer.

Vectorizing the code enables superword level parallelism (SLP). SLP is a type of SIMD parallelism in which source and result operands are packed in a storage location.

The example shows statement packing in which the operands have been packed into registers and the scalar addition and multiplication have been replaced by their vectorized counterparts.

### How does loop unrolling enable SLP?

Consider this loop:
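For instance, a sketch along the lines of the SLP paper’s examples:

```java
public class ScalarAdd {
    // Every iteration performs the same operations in the same order on
    // adjacent elements: load, load, add, store.
    static int[] add(int[] b, int[] c) {
        int[] a = new int[b.length];
        for (int i = 0; i < b.length; i++) {
            a[i] = b[i] + c[i];
        }
        return a;
    }

    public static void main(String[] args) {
        int[] a = add(new int[]{1, 2, 3, 4}, new int[]{4, 3, 2, 1});
        System.out.println(java.util.Arrays.toString(a)); // [5, 5, 5, 5]
    }
}
```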

This is a loop containing isomorphic statements, i.e. statements that contain the same operations in the same order. Such statements are easy to unroll. So the loop above can be transformed as follows:
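Continuing the sketch, unrolling by 4 exposes four isomorphic statements per iteration:

```java
public class UnrolledAdd {
    // The four statements in the unrolled body are isomorphic, which is what
    // lets SLP pack them into a single SIMD operation later.
    static int[] add(int[] b, int[] c) {
        int[] a = new int[b.length];
        int i = 0;
        for (; i + 3 < b.length; i += 4) {
            a[i]     = b[i]     + c[i];
            a[i + 1] = b[i + 1] + c[i + 1];
            a[i + 2] = b[i + 2] + c[i + 2];
            a[i + 3] = b[i + 3] + c[i + 3];
        }
        for (; i < b.length; i++) { // leftover elements
            a[i] = b[i] + c[i];
        }
        return a;
    }

    public static void main(String[] args) {
        int[] a = add(new int[]{1, 2, 3, 4, 5}, new int[]{5, 4, 3, 2, 1});
        System.out.println(java.util.Arrays.toString(a)); // [6, 6, 6, 6, 6]
    }
}
```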

The unrolled loop can then be transformed once more: the four isomorphic statements are packed together so that a single vector instruction loads, adds, and stores four elements at a time.

Now it becomes easy to see how SLP can be used to speed up the execution of the loop. Loop unrolling sets the stage for SLP.

### Which flags control loop unrolling and SLP?

There are a couple of flags - -XX:LoopUnrollLimit and -XX:+UseSuperWord. LoopUnrollLimit controls how many times your loop will be unrolled and UseSuperWord controls the transformation of scalar operations into vectorized operations.

### Conclusion

Loop unrolling makes your code faster by doing more per iteration and increasing the stride. It’s a space-time trade off where you make your code larger so that the execution takes less time. Loop unrolling can be taken a step further to make scalar instructions into vector instructions. These instructions operate on packed data and achieve more per instruction than their scalar counterparts.

The conversion from scalar instructions to vector instructions is called vectorization. Since this happens transparently to the programmer, it is called auto-vectorization.

There are a few blog posts that I recommend you read.

A fundamental introduction to x86 assembly programming
This post provides a brief introduction to x86 assembly programming. Having studied assembly programming in university a long time ago, I found the post a quick refresher on the basic concepts of assembly programming like addressing modes, registers, etc. I keep it handy and refer to it when I have to read the assembly code generated by the JVM.

JMH - Java Microbenchmark Harness
This post provides a brief introduction to JMH (Java Microbenchmark Harness). What does JMH do?

> JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targetting the JVM.

The post will show you how to write a simple benchmark, how to avoid common pitfalls, etc. It is a good read regardless of this series for two reasons: one, microbenchmarking is coming to Java 9 out-of-the-box and two, microbenchmarking is hard to get right.

Exploiting Superword Level Parallelism with Multimedia Instruction Sets
This is the research paper on superword level parallelism. I’ve borrowed examples and diagrams from this paper. The paper will introduce you to what SLP is, and also provide an algorithm on how to find parts of code which are amenable to SLP.

# JVM JIT - Compiling to Assembly

In the previous post we saw how JIT inlining works. We also saw how the JVM performs OSR to replace the interpreted version of a method with the compiled version on the fly. In this post we’ll dig even deeper and see the assembly code that is generated when a method gets compiled.

## Prerequisites

The flag which enables us to see assembly code is -XX:+PrintAssembly. However, viewing assembly code does not work out of the box. You’ll need to have the disassembler on your path. You’ll need to get hsdis (HotSpot Disassembler) and build it for your system. There’s a prebuilt version available for Mac and that’s the one I am going to use.

Once we have that, we’ll add it to LD_LIBRARY_PATH.
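Assuming the built library sits in a local hsdis directory (the exact path is an example):

```shell
# Make the hsdis disassembler visible to the JVM; adjust the directory for
# wherever you placed hsdis-amd64.dylib (or your platform's equivalent).
export LD_LIBRARY_PATH="./hsdis:$LD_LIBRARY_PATH"
```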

Now we’re all set to see how JVM generates assembly code.

## Printing assembly code

We’ll reuse the same inlining code from last time:
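For reference, a sketch of that code: a chain of three trivial static methods, driven by a loop to make them hot (the class name is my own):

```java
public class Inline {
    public static void main(String[] args) {
        long sum = 0;
        // Call inline1 enough times for HotSpot to consider it hot.
        for (int i = 0; i < 100_000; i++) {
            sum += inline1();
        }
        System.out.println(sum);
    }

    static int inline1() { return inline2(); }
    static int inline2() { return inline3(); }
    static int inline3() { return 4; }
}
```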

-XX:+PrintAssembly is a diagnostic flag so we’ll need to unlock JVM’s diagnostic options first. Here’s how:
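Something along these lines, assuming the class is called Inline and hsdis is on the library path:

```shell
# PrintAssembly is a diagnostic flag, so unlock diagnostic options first.
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Inline
```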

This will generate a lot of assembly code. We will, however, look at the assembly code generated for inline1.

So this is the assembly code that we get when we run the program. It’s a lot to grok in one go so let’s break it down.

Line #7 and #8 are self-explanatory; they show which method we’re looking at. Line #9 and #10 (and #13 to #17) are for thread synchronization. The JVM can get rid of thread synchronization if it sees that there is no need for it (lock eliding) but since we are using static methods here, it needs to add code for synchronization. It doesn’t know that we have only one thread running.

Our actual program is on line #11 where we are moving the value 4 to the %eax register. This is the register which holds, by convention, the return value for our methods. This shows that the JVM has optimized our code. Our call chain was inline1 → inline2 → inline3 and it was inline3 which returned 4. However, JVM is smart enough to see that these method calls are superfluous and decided to get rid of them. Very nifty!

Line #21 to #23 has code to handle exceptions. We know there won’t be any exceptions but the JVM doesn’t so it has to be prepared to deal with that.

And finally, there’s code to deoptimize. In addition to static optimizations, there are some optimizations that the JVM makes which are speculative. This means that the JVM generates assembly code expecting things to go a certain way after it has profiled the interpreted code. However, if the speculation is wrong, the JVM can go back to running the interpreted version.

## Which flags control compilation?

-XX:CompileThreshold is the flag which controls the number of call / branch invocations after which the JVM compiles bytecodes to assembly. You can use -XX:+PrintFlagsFinal to see the value. By default it is 10000.

Compiling a method to assembly depends on two factors: the number of times that method has been invoked (method entry counter) and the number of times a loop has been executed (back-edge counter). Once the sum of the two counters is above CompileThreshold, the method will be compiled to assembly.

Maintaining the two counters separately is very useful. If the back-edge counter alone exceeds the threshold, the JVM can compile just the loop (and not the entire method) to assembly. It will perform an OSR and start using the compiled version of the loop while the loop is executing instead of waiting for the next method invocation. When the method is invoked the next time around, it’ll use the compiled version of the code.

Since compiled code is better than interpreted code, and CompileThreshold controls when a method is compiled to assembly, reducing CompileThreshold means we end up with a lot more assembly code.

There is one advantage to reducing the CompileThreshold - it will reduce the time taken for the branches / methods to be deemed hot i.e. reduce the JVM warmup time.

In older JDKs, there was another reason to reduce CompileThreshold. The method entry and back-edge counters would decay at every safepoint. This would mean that some methods would not compile to assembly since the counters kept decaying. These are the “lukewarm” methods that never became hot. With JDK 8+, the counters no longer decay at safepoints so there won’t be any lukewarm methods.

In addition, JDK 8+ comes with tiered compilation enabled and CompileThreshold is ignored. The idea of there being a “compile threshold”, though, does not change. I’m deferring the topic of tiered compilation for the sake of simplicity.

## Where is the compiled code stored?

The compiled code is stored in JVM’s code cache. As more methods become hot, the cache starts to get filled. Once the cache is filled, the JVM can no longer compile anything to assembly and will resort to purely interpreting the bytecodes.

The size of code cache is platform dependent.

Also, the JVM ensures that access to the cache is optimized. The hlt instructions in the assembly code exist to align addresses: it is more efficient for the CPU to fetch code from aligned addresses in memory than from unaligned ones, and the hlt instructions pad the generated code so that it starts at an aligned address.

## Which flags control code cache size?

There are two flags which are important in setting the code cache size - InitialCodeCacheSize and ReservedCodeCacheSize. The first flag indicates the code cache size the JVM will start with and the latter indicates the size to which the code cache can grow. With JDK 8+, ReservedCodeCacheSize is large enough so you don’t need to set it explicitly. On my machine it is 240 MB (5x what it is for Java 7, 48 MB).

## Conclusion

The JVM compiles hot code to assembly and stores it, aligned, in its code cache for faster access. Executing assembly code is much more efficient than interpreting the bytecodes. You don’t really need to look at the generated assembly every day, but knowing what is generated as your code executes gives you an insight into what the JVM does to make your code run faster.

# JVM JIT - Inlining

In the previous post we looked at how interpreted and compiled languages work. To recap, an interpreter works by generating assembly code for every bytecode it encounters. This is a very simple way to execute a program and also a very slow one. It ends up redoing a lot of translation from bytecode to assembly. Also, this simplistic approach means that the interpreter cannot do optimizations as it executes the bytecodes. Then there are compilers which produce assembly ahead-of-time. This overcomes having to generate assembly again and again but once the assembly is generated it cannot be changed on the fly.

JVM comes with both an interpreter and a compiler. When the execution of the code begins, the bytecodes are interpreted. For the sake of this series, I’ll be looking at Oracle HotSpot JVM which looks for “hot spots” in the code as the bytecodes get interpreted. These are the parts of the code which are most frequently executed and the performance of the application depends on these. Once the code is identified as “hot”, JVM can go from interpreting the code to compiling it to assembly i.e. the code is compiled “just-in-time”. In addition, since the code is being profiled as it is run, the compiled code is optimized.

In this post we’ll look at one such optimization: inlining.

## Inlining

Inlining is an optimization where the call to a method is replaced by the body of the called method i.e. at the call site, the caller and the callee are melded together. When a method is called, the JVM has to push a stack frame so that it can resume from where it left off after the called method has finished executing. Inlining improves performance since JVM will not have to push a stack frame.
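To see this in action, consider a sketch like the following: a chain of three tiny static methods, called in a loop so that they become hot (the class and method names are mine):

```java
public class Inlining {
    public static void main(String[] args) {
        long sum = 0;
        // Loop enough times for HotSpot to deem the methods hot.
        for (int i = 0; i < 1_000_000; i++) {
            sum += inline1();
        }
        System.out.println(sum);
    }

    static int inline1() { return inline2(); }
    static int inline2() { return inline3(); }
    static int inline3() { return 4; }
}
```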

Next, let’s compile and run the code.

Output:

Line #1 shows that inline1 was compiled to assembly. Line #2 and line #6 show that inline2 and inline3 were also compiled to assembly. Line #3 to line #5 show inlining: we can see that inline3 was merged into inline2. Similarly, line #8 and #9 show that inline2 was merged into inline1. So basically, all the methods were inlined into inline1. This means that once a certain threshold is crossed, we’ll no longer be making method calls at all. This gives a significant performance boost.

### Which flags control inlining?

When you run a Java program, you can view the flags with which it ran using -XX:+PrintFlagsFinal. Let’s do that and look at a few flags of interest.

You’ll see a bunch of flags and their default values. The ones we are interested in are CompileThreshold, MaxInlineLevel, MaxInlineSize, and FreqInlineSize.

- CompileThreshold is the number of invocations before compiling a method to native code.
- MaxInlineLevel is a limit on how deep inlining can go before it stops. The default value is 9. This means that if we had a call chain like inline1 → inline2 → … → inline20, we’d only inline up to inline10; thereafter, we’d invoke inline11.
- MaxInlineSize is the maximum size of a method, in bytecodes, for it to be inlined. The default value is 35. This means that if the method to be inlined has more than 35 bytecodes, it will not be inlined.
- FreqInlineSize, in contrast, is the maximum size of a hot method, in bytecodes, for it to be inlined. This is a platform-dependent value; on my machine it is 325.

You can tweak these flags to change how inlining behaves for your program.

### What is On Stack Replacement (OSR)?

When we make a method call, the JVM pushes a stack frame. When a method is deemed hot, the JVM replaces the interpreted version with the compiled version by replacing the old stack frame with a new one. This is done while the method is running. We saw OSR being indicated in our example; the % indicates that an OSR was made.

Let’s write some code to see OSR in action once again.
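A sketch of the classic demonstration: the loop below keeps running while a WeakReference to an otherwise unused object stays alive, so it looks like it can never end (whether and when it terminates depends on the JVM, so treat this as illustrative):

```java
import java.lang.ref.WeakReference;

public class OsrDemo {
    public static void main(String[] args) {
        Object unused = new Object();
        // The loop's exit condition depends on `unused` staying reachable.
        WeakReference<Object> ref = new WeakReference<>(unused);
        long iterations = 0;
        while (ref.get() != null) {
            iterations++;
            if (iterations % 1_000_000 == 0) {
                System.gc(); // nudge the collector to clear the weak reference
            }
        }
        System.out.println("Loop exited after " + iterations + " iterations");
    }
}
```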

So this is a loop that will never terminate, right? Let’s run the program and see.

What just happened? When the JVM decided to perform an OSR, it saw that there was no use for the unused object and set it to null, causing the WeakReference to return null and thus breaking the loop. When an OSR is performed, the method that is invoked doesn’t restart execution from the start. Rather, it continues from the “back-edge”; in our case, that is the loop. Since the JVM saw that there was no use for the unused object after this back-edge, it was removed and the loop could terminate.

Being able to resume execution from the back-edge is very efficient. It means that once a method has been compiled to native code, the compiled code can be used right away rather than at the next invocation of the method.

## Conclusion

To recap, we saw how JVM inlines code. Fusing the caller and the callee provides for improved performance since the overhead of method dispatch is avoided. We saw the flags which control inlining and we saw how JVM performs OSR.

Inlining is a very useful optimization because it forms the basis for other optimizations like escape analysis and dead code elimination.

# JVM JIT - Introduction

## Motivation

My day job requires me to write code in Clojure. This means the code is eventually compiled to bytecode and run on the JVM. Intrigued by how the JVM does what it does, I decided to dig a little deeper and look at how it optimizes the code on the fly. In this series of posts I will be looking at the JVM JIT (Just-In-Time) compiler.

## Myriad ways to run a program

Before I go into how JVM JIT works, I want to take a quick look at how interpreted and compiled languages work. For this post, I’ll take a look at the working of Python (an interpreted language) and C (a compiled language).

### Python

Python, by default, ships with CPython, the original Python interpreter, which executes C code for every bytecode. There are other implementations like IronPython and PyPy. IronPython turns Python into a fully compiled language running on top of Microsoft’s .NET Common Language Runtime (CLR), whereas PyPy turns Python into a JIT-compiled language. For the sake of this post, however, I will look at CPython and how it works.

I’ll start with some code which will print the bytecodes for another Python file that is passed to it.
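
The original snippet isn’t reproduced here. A minimal sketch of such a bytecode printer, using Python’s built-in dis module (the file handling is my assumption, not the author’s code):

```python
import dis
import sys

def print_bytecodes(path):
    """Compile a Python source file and print its bytecode."""
    with open(path) as f:
        source = f.read()
    code = compile(source, path, "exec")  # compile the source to a code object
    dis.dis(code)                         # disassemble and print the bytecode

if __name__ == "__main__" and len(sys.argv) > 1:
    print_bytecodes(sys.argv[1])
```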

Next, here’s some code that’ll print numbers.

Now, let’s run the code and see the bytecodes we get.

Output:

The loop starts on line #4. For every element in the list, we’re pushing print and n onto the stack, calling the function, popping the stack, and repeating the loop. Each of these bytecodes (FOR_ITER, STORE_NAME, etc.) has associated C code.

This is a very simple way to run a program and also a very inefficient one. We’re repeating the stack operations and jumps over and over again. There’s no scope for optimizations like loop unrolling.

### C

In contrast to Python is C: all C code is compiled to assembly ahead of time. Here’s a simple C program which will print “EVEN” if a number is even.

Next, let’s compile this code.

This will generate numbers.s. The assembly is fairly long so I’ll just cover the relevant parts.

Lines #2 - #3 show that if we’ve reached the limit of 10k, we’ll jump to LBB0_7 and the program ends.
If not, on line #5 we perform a signed division (idivl) and check whether the remainder is zero. If it is not zero, we jump to LBB0_4 and print L_.str.1, which is just whitespace.

We will always end up making this jump because we’ll never reach the condition where we have an even number. This is the problem with ahead-of-time compilation: you cannot speculate about what the data is going to be, and therefore you have to be ready to handle all the possibilities.

### JVM JIT

JVM JIT combines the best of both the worlds. When you execute your program the first time, the bytecodes are interpreted. As the code continues to execute, JVM collects statistics about it and the more frequently used code (“hot” code) is compiled to assembly. In addition, there are optimizations like loop unrolling. Loop unrolling looks like this:
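
The unrolled code itself isn’t shown here. As an illustrative sketch (in Python rather than the JIT’s assembly), unrolling a summation loop by a factor of four replaces many small iterations with fewer, larger ones:

```python
def sum_rolled(xs):
    # one addition and one loop check per element
    total = 0
    for x in xs:
        total += x
    return total

def sum_unrolled(xs):
    # process four elements per iteration: fewer jumps and loop checks
    total = 0
    i = 0
    n = len(xs)
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:  # handle any leftover elements
        total += xs[i]
        i += 1
    return total
```

Both functions compute the same sum; the unrolled version simply does less loop bookkeeping per element.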

Unrolling a loop helps avoid jumps and thus makes execution faster.

Also, since JVM collects statistics about code, it can make optimizations on the fly. For example, in the case where an even number is never reached, JVM can generate assembly code that’ll only have the else part of the branch.

## Conclusion

The JVM does some fairly interesting optimizations under the hood. The aim of this series of posts is to cover as many of them as possible. We’ll start simple and build upon this as we go.

# Transducers

One of the nice things that you’ll come across in Clojure is the transducer. In this post I’ll go over what transducers are, how you can use them, how you can make one, and what transducible contexts are.

## What are transducers?

In simple terms, transducers are composable transformation pipelines. A transducer does not care about where the input comes from or where the output will go; it simply cares about the transformation of the data that flows through the pipeline. Let’s look at an example:

Here xf (short for xform, a transforming function) is our transducer, which will increment every number and then keep only the even numbers. Calling sequence functions like map, filter, etc. with a single argument returns a transducer which you can then compose. The transducer doesn’t know where it will be used: will it be used with a collection or with a channel? So, a transducer captures the essence of your transformation. sequence is responsible for providing the input to the transducer; this is the context in which the transducer will run.

Here’s how the same thing can be done using the threading macro:

The difference here is that the 2-arity versions of map and filter create intermediate collections, while the 1-arity versions don’t. Transducers are much more efficient than threading together sequence functions.
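
As a rough Python analogy (not Clojure), the difference is the one between materializing a full intermediate list at each step and fusing the steps so each element flows through the whole pipeline at once:

```python
nums = range(10)

# Eager: each step materializes a full intermediate list
incremented = [n + 1 for n in nums]                 # intermediate list
evens_eager = [n for n in incremented if n % 2 == 0]

# Fused: generators process one element at a time, with no intermediates
evens_fused = list(n for n in (n + 1 for n in nums) if n % 2 == 0)
```

Both produce the same result; the fused version never builds the intermediate list of incremented numbers.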


## Inside a transducer

Let’s look at the 1-arity version of map and see what makes a transducer.

When you call the 1-arity version of map you get back a transducer which, as shown above, is a function. Functions like map, filter, etc. take a collection and return a collection. Transducers, on the other hand, take one reducing function and return another. The function returned is expected to have three arities:

• 0-arity (init): This kickstarts the transformation pipeline. The only thing you do here is call the reducing function rf.
• 2-arity (step): This is where you'll perform the transformation. You get the result so far and the next input. In the case of map, you call the reducing function rf by applying the function f to the input. How the value is going to be added to the result is the job of rf. If you don't want to add anything to the result, just return the result as-is. You may call rf once, multiple times, or not at all.
• 1-arity (end): This is called when the transducer is terminating. Here you must call the 1-arity version of rf exactly once. This results in the production of the final value.
So, the general form of a transducer is this:
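
The Clojure template itself isn’t reproduced here. As a rough Python sketch of the same shape (modelling only the 2-arity step; the names are illustrative), here are a map-style and a filter-style transducer, composed and run with reduce:

```python
from functools import reduce

def mapping(f):
    """A map transducer: takes a reducing function rf, returns a new one."""
    def transducer(rf):
        def step(result, item):
            return rf(result, f(item))  # transform the input, let rf accumulate
        return step
    return transducer

def filtering(pred):
    """A filter transducer: only passes items satisfying pred on to rf."""
    def transducer(rf):
        def step(result, item):
            return rf(result, item) if pred(item) else result
        return step
    return transducer

def append(result, item):
    result.append(item)
    return result

# Compose: increment, then keep evens (like (comp (map inc) (filter even?)))
def xf(rf):
    return mapping(lambda n: n + 1)(filtering(lambda n: n % 2 == 0)(rf))

out = reduce(xf(append), range(10), [])
```

Note how the transformation (xf) is independent of both the input (range(10)) and the output (the list built by append).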

## Using transducers

You can use a transducer in a context. There are four contexts which come out of the box: into, transduce, sequence, and eduction.

### into

The simplest way to use a transducer is to pass it to into. This will add your transformed elements to an already-existing collection after applying the transducer. In this example, we’re simply adding a range into a vector.

Internally, into calls transduce.

### transduce

transduce is similar to the standard reduce function but it also takes an additional xform as an argument.

### sequence

sequence lets you create a lazy sequence after applying a transducer. In contrast, into and transduce are eager.

### eduction

eduction lets you capture the application of a transducer to a collection. The value returned is an iterable application of the transducer on the collection items, which can then be passed to, say, reduce.

## Inside a transducible context

As mentioned before, transducers run in transducible contexts. The ones that come as part of clojure.core will suffice for most real-world needs, and you’ll rarely find yourself writing new ones. Let’s look at transduce.

transduce is just like reduce. The 3-arity version expects an initial value to be supplied by calling the 0-arity version of the supplied function. The 4-arity version is slightly more involved. IReduceInit is an interface implemented by collections to let them provide an initial value; it lets a collection reduce itself. If the collection doesn’t implement it, the call goes to coll-reduce, which is a faster way to reduce a collection than using first/next recursion.

## Stateful transducers

It’s possible for transducers to maintain reduction state.

Here’s a transducer which will multiply all the incoming numbers. We maintain state by using a volatile. Whenever we get a new input, we multiply it with the product so far and update the state of the volatile using vswap!. Let’s see this in action:

## Early Termination

The way the above transducer is written, it’ll process all the inputs even if one of the inputs is zero. We know that once we encounter a zero, we can safely end the reduction process and return a zero. reduced lets you return a reduced value and end the reduction. Let’s make a minor change to the above transducer and add in early termination.

In the 2-arity function, we check if the new-product is zero. If it is, we know we have a reduced value. We end the reduction by returning the result we have so far. Let’s see this in action:
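
In Python terms (an analogy, not Clojure’s implementation), reduced can be modelled as a sentinel wrapper that the reducing loop checks for on every step:

```python
class Reduced:
    """Sentinel wrapper signalling that the reduction is finished."""
    def __init__(self, value):
        self.value = value

def reduce_with_early_exit(rf, init, coll):
    result = init
    for item in coll:
        result = rf(result, item)
        if isinstance(result, Reduced):  # stop as soon as rf says it's done
            return result.value
    return result

def multiply(result, item):
    product = result * item
    # once we hit zero the product can never change, so terminate early
    return Reduced(0) if product == 0 else product

out = reduce_with_early_exit(multiply, 1, [1, 2, 0, 4, 5])
```

With a zero in the input, the loop stops at the zero instead of consuming the rest of the collection.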

## Conclusion

Transducers can be a very useful tool in your Clojure toolkit that let you process large collections, channels, etc. effectively by letting you make composable transformation pipelines that process one element at a time. They require a little getting used to, but once you’re past the learning curve, performance galore!

# One Year as a Software Engineer - a Retrospective

December 2017 marks my completing one year as a software engineer. This post is a retrospective where I list the lessons I’ve learned, in no specific order of priority.

## Personal Life

### Learn to invest

We’re having too good a time today. We ain’t thinking about tomorrow.
— John Dillinger, Public Enemy

It’s easy to get carried away because of the fat pay cheque your fancy developer job brings you at the end of the month. Think of all the bling you can buy. This hedonistic attitude, however, does not help you hedge against the volatility of the job market. In his book Out of our Minds: Learning to be Creative, Ken Robinson states that secure life-long employment in a single job is a thing of the past. This applies even more if you work in a startup environment where things change in the blink of an eye.

Making proper investments and establishing a second flow of cash can help you get a grip on the situation when it gets rough. The simplest way to invest is to put your money in the stock market and let the dividends add to your monthly flow of cash. You do not have to be an active day trader managing your positions. You can very easily invest in mutual funds, index funds, etc. and they are not that difficult to begin with.

One of the best books I’ve been recommended by a friend of mine is Trading and Exchanges: Market Microstructure for Practitioners by Larry Harris. This will give you a very good overview of the market and all that you need to become a confident investor.

Love your job but don’t love your company, because you may not know when your company stops loving you.
— A. P. J. Abdul Kalam, 11th President of India

The key takeaway of working in a startup environment is this: things change very rapidly and when push comes to shove, you will be thrown overboard. Having seen this happen to people close to me, I’ve learned that you need to be ready to interview with other startups and/or companies as soon as you are fired or have resigned. This includes staying in touch with the fundamentals you’ve learned in your CS classes, like data structures and algorithms, and also knowing the technologies you’ve used in reasonable depth. Also, having a good network of developers goes a long way in easing your search for a new job. And don’t forget to sharpen the saw: do things that are not programming that make you a better programmer.

### Learn to negotiate

Dude, it’s five minutes. Let’s un-suck your negotiation.
— Patrick McKenzie

Learning how to negotiate is one of the most important skills, and it is often overlooked. It is overlooked because negotiating for salary is perceived as cheap, or simply to avoid a difficult conversation. Whatever the case, get over it and learn the skill. As Patrick McKenzie mentions in his brilliant blog post Salary Negotiation: Make More Money, Be More Valued, all it takes is five minutes to finalize your salary. These five minutes have a lasting impact for at least a year to come.

— William Faulkner

I’ll admit that it is hard to take time out to read daily, but keeping a dedicated slot of 30 minutes just for reading goes a long way. I make sure it’s a distraction-free slot with no dinging notifications, and I try not to multitask. Another exercise that I do in conjunction with reading is trying to improve my retention of the material I’ve read by improving my recall memory. The best way to do this is to use the Feynman technique, where you elucidate what you’ve learned and pretend to teach it to a student.

I prefer keeping one programming and one non-programming book as a part of my daily reading. In addition, there’s a list of blogs, and papers that I read from time to time. I’ll probably post a list as a separate post.

## Engineering

### Understand the Peter principle

The key to management is to get rid of the managers.
— Ricardo Semler

Laurence Peter came up with the concept that a person in a role keeps getting promoted based on their performance in their current role and not on the abilities that the next role demands. It is quite possible to have a manager who doesn’t know how to do his job well, i.e. he’s risen to his level of incompetence. This principle is important to understand as an engineer and should make one reflect on one’s current skill set: do you possess the abilities the role you are currently in demands, or do you need to skill up? Again, this goes back to my point on sharpening the saw and doing non-programming things that make you a better developer.

### Tech debt is evil

Simplicity is hard work. But, there’s a huge payoff. The person who has a genuinely simpler system - a system made out of genuinely simple parts, is going to be able to affect the greatest change with the least work. He’s going to kick your ass. He’s gonna spend more time simplifying things up front and in the long haul he’s gonna wipe the plate with you because he’ll have that ability to change things when you’re struggling to push elephants around.
— Rich Hickey

When Ward Cunningham came up with the tech-debt metaphor, he was referring to writing code despite having a poor understanding of the requirements. As time passes, whatever little understanding there was of the requirements fades away, and the code is taken for granted. The definition of tech debt has since come to mean poorly written code that nobody understands later and that is taken for granted, something Ward Cunningham disagrees with.

The lethal combination is poorly written code for badly understood requirements, and you’ll come across this very often in startup environments. This comes with some pretty nasty ramifications like team in-fighting and politics. In the worst cases, it can bring the development of new features to a grinding halt.

Avoiding tech debt has to be a top priority for any startup that wants to grow. A lot of it revolves around establishing some processes to convey requirements among teams and ensuring that the resulting system design is simple. Like the quote by Rich Hickey shows, it is hard work but it will pay off in the longer run.

However, logging doesn’t seem to receive the same level of attention; consequently, developers find it hard to know the ‘what, when, and how’ of logging.
— Colin Eberhardt

Please stop asking developers to SSH into machines to read the logs. Centralize your logs by using ELK or, if you want to avoid the hassle of setting up ELK, use a third-party service or a collector like Fluentd. A good centralized logging strategy will not only save you the pain of SSH-ing into multiple servers and grep-ing, it will also let you search through the logs easily. In addition, aggregating logs from various servers helps you identify patterns that may emerge when you check what’s happening on multiple servers in a specific time range.

# Unit Testing with Specs

clojure.spec is a standard, expressive, powerful, and integrated system for specification and testing. It lets you define the shape of your data and place constraints on it. Once the shape and constraints are defined, clojure.spec can then generate sample data which you can use to test your functions. In this post I’ll walk you through how you can use clojure.spec in conjunction with other libraries to write unit tests.

## Motivation

As developers, we are accustomed to writing example-based tests - we provide a known input, look at the resulting output, and assert that it matches our expectations. Although there is nothing wrong with this approach, there are a few drawbacks:

1. It is expensive as it takes longer to complete.
2. It is easier to miss out on corner cases.
3. It is more prone to the pesticide paradox.[1]

In contrast, clojure.spec allows you to do generative, property-based testing. Generative testing allows you to specify what kind of data you are looking for. This is done by using generators. A generator is a declarative description of possible inputs to a function.[2] Property-based testing allows you to specify how your program is supposed to behave, given an input. A property is a high-level specification of behavior that should hold for a range of inputs.[3]

## Setup

### Creating an App

We’ll begin by creating an app using lein and defining the dependencies. So go ahead and execute the following to create your project:

Next we’ll add a few dependencies. cd into clj-spec and open project.clj. Add the following to your :dependencies

clojure.spec comes as a part of Clojure 1.9 which, as of writing, isn’t out yet. If you’re on Clojure 1.8, as I am, you can use clojure-future-spec which will give you the same APIs. circleci/bond is a stubbing library which we’ll use to stub IO, network calls, database calls, etc. cloverage is the tool we’ll use to see the coverage of our tests.

## Using clojure.spec

### Simple Specs

Fire up a REPL by executing lein repl and require the required namespaces ;)

spec will let us define the shape of our data, and constraints on it. gen will let us generate the sample data.

Let’s write a simple spec which we can use to generate integers.

We’ve defined a spec ::n which will constrain the sample data to only be integers. Notice the use of double colons to create a namespace-qualified keyword; this is a requirement of the spec library. Now let’s generate some sample data.

s/gen takes a spec as an input and returns a generator which will produce conforming data. gen/generate exercises this generator to return a single sample value. You can produce multiple values by using gen/sample:

We could have done the same thing more succinctly by using the in-built functions as follows:

### Spec-ing Maps

Let’s say we have a map which represents a person and looks like this:

Let’s spec this.

We’ve defined ::name to be a string, and ::age to be an integer (positive or negative). You can make your specs as strict or as lenient as you choose. Finally, we define ::person to be a map which requires the keys ::name and ::age, albeit without namespace-qualification. Let’s see this in action:

By now you must have a fair idea of how you can spec your data and have sample values generated that match those specs. Next we’ll look at how we can do property-based testing with specs.

## Using test.check

test.check allows us to do property-based testing. Property-based tests make statements about the output of your code based on the input, and these statements are verified for many different possible inputs.[4]

### A Simple Function

We’ll begin by testing the simple function even-or-odd. We know that for all even numbers we should get :even and for all odd numbers we should get :odd. Let’s express this as a property of the function. Begin by require-ing a couple more namespaces.

Now for the actual property.

We have a generator which will create a vector of 0s and 1s only. We pass that vector as an input to our function. Additionally, we know that the number of 0s should equal the number of :evens returned and that the number of 1s should equal the number of :odds returned.

Next, let’s test this property.

Awesome! We ran the test 100 times and it passed. The added benefit is that the input generated will be different every time you run the test.
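
A rough Python analogue of this property (using plain random sampling in place of test.check’s generators; the names here are mine, not the post’s):

```python
import random

def even_or_odd(ns):
    """Classify each number as even or odd."""
    return ["even" if n % 2 == 0 else "odd" for n in ns]

def check_property(trials=100):
    for _ in range(trials):
        # "generator": a random-length vector of 0s and 1s only
        ns = [random.choice([0, 1]) for _ in range(random.randint(0, 20))]
        out = even_or_odd(ns)
        # property: every 0 maps to "even" and every 1 maps to "odd"
        assert ns.count(0) == out.count("even")
        assert ns.count(1) == out.count("odd")
    return True

check_property()
```

As with test.check, each run exercises the property against freshly generated inputs.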

### Using bond

bond is a library which will let you stub side-effecting functions like database calls. We’ll require the namespace and modify our code to save even numbers to a database.

First, the namespace.

Next, the code.

Now let’s update the property and stub save.

Notice how we’re using bond/with-stub and telling it to stub the save function, which calls the database. Later, we assert that the number of times the database was called is equal to the number of even numbers in the vector. Let’s verify the property.

Voilà! It works!

The last part of this post is about finding out test coverage using cloverage. For that, we’ll move our code to core.clj and write tests under the test directory.

## Using cloverage

To see cloverage in action, we’ll need to add our functions to core.clj. Here’s what it’ll look like:

Update your clj-spec.core-test to the following:

Here we are using the defspec macro to run the same property-based test 100 times, only this time we’ll run the test via the command line using lein. Execute the following command to run the tests and see the coverage.

This will make use of cloverage to run the tests. -t denotes our test namespace and -n denotes the namespace for which the tests are written. You’ll get an output like this:

Perfect!! Now we know how much coverage we have. The HTML file has a nice graphical representation of which lines we’ve covered with our tests.

## Conclusion

This brings us to the end of the post on using clojure.spec to write generative, property-based tests in both the REPL and source files. Generative testing automates the task of having to come up with examples for your tests. Where to go from here? Each of these libraries is pretty powerful in itself and will provide you with the necessary tools to write powerful, robust, and expressive tests that require minimal effort. So, head over to the official docs to learn more.

# Monad

So far we’ve looked at monoids and functors. The next algebraic data structure we’ll cover is a monad. If you’ve wondered what a monad is but never really understood it, this is the post for you. I am sure that you’ve used it without realizing it. So let’s get to it.

## Definition

A monad has more structure than a functor. This means that you can call map on it and that it obeys all the functor laws. In addition, a monad has a flatMap function which you can use to chain monads together. In essence, monads represent units of computation that you can chain together and the result of this chaining is also a monad.

Let’s look at a few examples.

## Example

The above code[1] uses a for comprehension to multiply elements of the list together. Under the hood, this gets translated to:

The compiler is making use of the List monad to chain operations together. Let’s break this down.

This part of the code will return a List, since that is what calling map on a List does. Since we have two elements in the first list, the result of mapping will generate two lists of two elements each. This isn’t what we want. We want a single list that combines the results together.

The flattening of results is what flatMap does - it takes the two lists and squishes them into one.
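
The post’s examples are in Scala, but the idea renders easily in Python: mapping alone nests the results, while flat-mapping flattens them into a single list:

```python
def flat_map(xs, f):
    # apply f to each element and concatenate the resulting lists
    return [y for x in xs for y in f(x)]

# map alone: two lists of two elements each, nested
nested = [[x * y for y in [3, 4]] for x in [1, 2]]

# flat_map: the same products, squished into one list
flat = flat_map([1, 2], lambda x: [x * y for y in [3, 4]])
```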

For something to be a monad, it has to obey the monadic laws. There are three monad laws:

1. Left identity
2. Right identity
3. Associativity

### Left Identity

This law means that if we take a value, put it into a monad, and then flatMap it with a function f, that’s the same as simply applying the function f to the original value. Let’s see this in code:
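
The original Scala snippet isn’t shown here; a Python rendering for the list monad, where putting a value into the monad means wrapping it in a one-element list:

```python
def flat_map(xs, f):
    return [y for x in xs for y in f(x)]

f = lambda x: [x * 2]  # a function returning a monadic (list) value
v = 21

# left identity: unit(v).flatMap(f) == f(v)
assert flat_map([v], f) == f(v)
```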

### Right Identity

This law means that if we take a monad, flatMap it, and within that flatMap we try to create a monad out of each value, then that’s the same as the original monad. Let’s see this in code:
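
In Python terms (a sketch, standing in for the Scala snippet), wrapping each element back into the monad and flattening returns the original list:

```python
def flat_map(xs, f):
    return [y for x in xs for y in f(x)]

m = [1, 2, 3]

# right identity: m.flatMap(unit) == m
assert flat_map(m, lambda x: [x]) == m
```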

Let’s walk through this. The function passed to flatMap gets the elements of the original list, List(1, 2, 3), one by one. The result is List(List(1), List(2), List(3)). This is then flattened to create List(1, 2, 3), which is the original list.

### Associativity

This law states that if we apply a chain of functions to our monad, that’s the same as the composition of all the functions. Let’s see this in code:
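
A Python sketch of the associativity law for the list monad (the Scala snippet isn’t reproduced here):

```python
def flat_map(xs, f):
    return [y for x in xs for y in f(x)]

m = [1, 2, 3]
f = lambda x: [x + 1]
g = lambda x: [x * 10]

# associativity: m.flatMap(f).flatMap(g) == m.flatMap(x => f(x).flatMap(g))
lhs = flat_map(flat_map(m, f), g)
rhs = flat_map(m, lambda x: flat_map(f(x), g))
assert lhs == rhs
```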

## Conclusion

This brings us to the end of the post on monads and their laws. List isn’t the only monad in your arsenal. Options and Futures are monads, too. I suggest going ahead and constructing examples for monadic laws for them.

# Functor

The next algebraic structure we’ll look at is a functor. In the introduction, we saw that a category consists of objects and arrows. As an example, we morphed a set of strings to another set which contained the reverse of those strings. In other words, we morphed an object to another object. What if we could morph an entire category to another category while preserving the structure? Well, that’s what a functor does.

## Formal Definition

Let C and D be categories. A functor F : C ⟶ D is a map taking each C-object A to a D-object F(A), and each C-arrow f : A ⟶ B to a D-arrow F(f) : F(A) ⟶ F(B), such that, for all C-objects A and composable C-arrows f and g, F(1_A) = 1_F(A) and F(g ∘ f) = F(g) ∘ F(f).

## Example

Say we have a set A. From this set we create another set List(A), which contains finite lists of elements drawn from A. The functor we want maps from set to set. Since we know that a category contains objects and arrows, List(A) becomes the object part. The arrow part takes a function f : A ⟶ B to a function List(f) : List(A) ⟶ List(B) that, given a list, maps f over its elements.

How does this translate to code? This actually translates fairly easily to code. Containers like lists, trees, etc. that you can call map on are functors.

Let’s write some code. We’ll begin by creating a set .

Next, we’ll create L, a list of elements drawn from A.

Next, let’s create the function f.

Next, we’ll create the function maplist.

Finally, let’s see this in action:

As we can see, maplist applied the function f on all elements of L. We did this by using the map method of a List instance.
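
The snippets above aren’t reproduced here; the same walkthrough in Python (the concrete set, list, and function are my illustrative choices) might look like this:

```python
A = {"apple", "banana", "cherry"}   # the set A
L = ["apple", "banana"]             # a finite list of elements drawn from A
f = lambda s: s.upper()             # an arrow f

def maplist(fn, xs):
    # the arrow part of the List functor: lift fn to act on whole lists
    return [fn(x) for x in xs]

out = maplist(f, L)
```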

## Functor Laws

All functors are expected to obey the two laws that we saw in the formal definition. Let’s see how they translate to code.

### First Law

The first law states that if we map the identity function over a functor, we’ll get back a functor which is the same as the original functor.
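
In Python (standing in for the original snippet), with lists as the functor:

```python
identity = lambda x: x

def maplist(fn, xs):
    """Map fn over a list: the arrow part of the List functor."""
    return [fn(x) for x in xs]

L = [1, 2, 3]
# first functor law: mapping identity returns an equal functor
assert maplist(identity, L) == L
```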

As we can see, applying identity to the list gives back the same list.

### Second Law

The second law states that if we map a functor using a composition of two functions, g ∘ f, it’s the same as first mapping the functor using the first function, f, and then mapping the resulting functor using the second function, g.

We’ll begin by creating two functions f and g.

Now let’s put the theory into practice.

As we see, the two lists are the same.
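
The Scala snippets aren’t shown above; a Python sketch of the composition law, with two illustrative functions:

```python
def maplist(fn, xs):
    return [fn(x) for x in xs]

f = lambda x: x + 1
g = lambda x: x * 2

L = [1, 2, 3]
composed = maplist(lambda x: g(f(x)), L)   # map g . f in a single pass
two_pass = maplist(g, maplist(f, L))       # map f, then map g
assert composed == two_pass
```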

## More Functor Examples

### Example 1

Let’s consider a category where objects are integers. Arrows between objects indicate a “divided by” relationship. For example,

This indicates that 10 can be divided by 5. To reiterate, objects are numbers and arrows represent a “divided by” relationship.

Now let’s create a functor from the category to itself. This functor will multiply each object by 13. So, F(a) = 13a. Is this a valid functor? We have an arrow a ⟶ b, but is it true that we also have an arrow 13a ⟶ 13b?

The answer is yes. Our category has arrows that indicate a “divided by” relationship, so the quotient of the two objects joined by an arrow is an integer. Multiplying both objects by 13 leaves that quotient unchanged, so it is still an integer and the “divided by” relationship is maintained. This shows that arrows do not always have to be functions. They can also indicate a relationship between their domain and codomain.
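
The argument can be checked numerically. In this hypothetical sketch, has_arrow(a, b) encodes the “divided by” relationship as b being divisible by a (the direction of the arrow is my assumption):

```python
def has_arrow(a, b):
    # an arrow exists between a and b when b is divisible by a
    return b % a == 0

F = lambda n: 13 * n  # the functor multiplies each object by 13

# for every pair joined by an arrow, the image pair is joined by one too
for a, b in [(5, 10), (3, 9), (7, 21)]:
    assert has_arrow(a, b)
    assert has_arrow(F(a), F(b))
```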

## Conclusion

In this post we saw functors which map objects from one category to another. Containers like trees, lists, etc. are functors. All functors are required to obey the two functor laws.