Functional Programming Introduction in JavaScript

Introduction

I'm sure you've heard or read about this programming paradigm. In this post you will learn what functional programming is and see some examples that will make you think about how you code and maybe even encourage you to apply it in some project.

What's functional programming?

As we mentioned before, functional programming is a programming paradigm: a way of structuring your application that treats computation as the evaluation of mathematical functions, staying away from practices that cause state changes and data mutation.

What?

Actually, it's not that easy to grasp from theory alone, especially if you're used to a specific coding style, so let's see an example in practice:

This is the way we're used to coding by default (imperative way):

const numbers = [1, 2, 3, 4];
let doubled = [];

for (let i = 0; i < numbers.length; i++) {
  doubled.push(numbers[i] * 2);
}

console.log(doubled); //==> [2, 4, 6, 8]

This is how it's made using a declarative way (using functional programming):

const numbers = [1, 2, 3, 4];
const doubled = numbers.map(n => n * 2);

console.log(doubled); //==> [2, 4, 6, 8]

In both cases our goal was to double the value of each element in the array. In the imperative approach we use a counter to traverse the array and keep track of each element's position. How many times has that darn counter caused a bug in our code? In the declarative approach we forget about the counter and focus on the function we want to apply. In other words, with a declarative style our statements say WHAT to do by means of a function, not HOW to do it, in contrast to imperative programming. Interesting, right?

JavaScript FP tools

While some languages were created to be used following a functional approach, JavaScript is not one of them. However, we can apply some concepts, practices and libraries to use this paradigm by:

  • Ensuring the data we're using in our application is immutable.
  • Using pure functions.
  • Using currying.
  • Using function composition.

These concepts will be our toolbelt to use this programming paradigm in JavaScript. We're going to explain these concepts by showing you some libraries and examples.

In this post we're going to use some features from ECMAScript 6. If you're not familiar with this JavaScript version, we recommend going through a tutorial prior to reading this post.

Immutability

Let's see some examples before we get into the theory. What happens when I modify a copy of a variable?

let foo = 1;
let bar = foo;
bar += 1;

console.log(foo); // ==> 1

Ok. The foo value was not changed. That makes sense. Let's continue...

let foo = "foo";
let bar = foo;
bar += "bar";

console.log(foo); // ==> "foo"

Same as before, nothing strange...

let foo = [1, 2, 3];
let bar = foo;
bar.push(10000);

console.log(foo); // ==> [1, 2, 3, 10000]

Whoops! The foo value was changed here! I'm sure you've suffered some bugs originating from mutations like this. Usually our applications are more complex, and in some sections the declaration of the variable isn't visible at a glance. This is where a lot of nasty bugs happen in JavaScript. Using functional programming concepts, we avoid mutating our variables and work with immutable data. In order to apply this we use the following strategies:

  • We can reduce / eliminate unnecessary variable declarations. (Some functional languages don't even allow reassignment.)
  • We can use immutable data structures. If you need help in that area, there are libraries like Immutable.js created by Facebook and mori.
  • We can use some built-in functions like Object.freeze and Object.seal. Object.freeze makes an object immutable: if we try to change one of its properties we'll get an error (in strict mode; otherwise the change fails silently). It is shallow, though, so nested objects can still be mutated unless we use a library like deep-freeze. Object.seal, on the other hand, prevents new properties from being added to an object, but we can still mutate the existing ones. See the sketch after this list.
  • We can make use of ES6 Spread operator (...), this needs some extra work in some cases, but you don't depend on other libraries.
  • We can use some libraries that were built keeping functional paradigm in mind, like ramda or lodash/fp.
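
As a quick sketch of how Object.freeze and Object.seal behave (this assumes strict mode, where the forbidden operations throw instead of failing silently):

"use strict";

const frozen = Object.freeze({ name: "Ada", skills: ["js"] });
// frozen.name = "Grace";       // TypeError: cannot assign to a frozen property
frozen.skills.push("fp");       // works: freeze is shallow, nested objects stay mutable

const sealed = Object.seal({ name: "Ada" });
sealed.name = "Grace";          // allowed: existing properties can still change
// sealed.surname = "Lovelace"; // TypeError: no new properties on a sealed object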

Going back to the earlier example, we could avoid mutating the foo variable by using the append function from Ramda:

import { append } from "ramda";
const foo = [1, 2, 3];
const bar = append(4, foo);

console.log(foo); // ==> [1, 2, 3]

Another option could be (using the spread operator approach):

const foo = [1, 2, 3];
const bar = [...foo, 4];

console.log(foo); // ==> [1, 2, 3]

Pure functions

A pure function is a function whose return value depends only on its arguments, so given the same input it will always produce the same output (it's deterministic and referentially transparent), and it produces no side effects, meaning it does not alter any external state (global or local).

An implementation of a pure function would look like this:

function double(n) {
  return n * 2;
}

This function receives an argument 'n' that is not mutated, and always returns the same result for that value 'n'.

There are some code smells that can help you spot an impure function:

  • It has no arguments.
  • It does not return any value.
  • It makes use of 'this'.
  • It uses global variables.

Here are some examples of impure functions:

// Impure: It returns void and it mutates the environment.
console.log("Hello");

// Impure: It has no arguments and produces a different result each time it's called (non-deterministic).
Math.random();

// Impure: It mutates the array as a side effect.
array.splice(2, 3);

Using pure functions instead of impure ones ensures our data is not changed by accident. Also, as you will appreciate later on, it allows splitting our code into smaller, more reusable functions that are easier to test and read.

... This sounds very interesting but rather theoretical. Can we see a simple case of pure functions in practice? Let's get into it. Suppose we need to create a function that appends the sum of all elements of an array as its last element; that means if we have an array with values [2, 3] it will return [2, 3, 5]. That sounds easy, right? Looking for possible implementations, we see that with reduce we can calculate the sum and with push we can add an element to an array.

function appendSumOfValues(entryArray) {
  const total = entryArray.reduce(
    (accumulator, currentValue) => accumulator + currentValue
  );
  entryArray.push(total);

  return entryArray;
}

Let's use this function:

const original = [3, 2];
console.log(appendSumOfValues(original));

So far so good, when we execute the code we get the expected result: [3, 2, 5].

Let's try one more thing:

const original = [3, 2];
console.log(appendSumOfValues(original));
console.log(appendSumOfValues(original));
console.log(appendSumOfValues(original));

What result do we expect?

[3, 2, 5]
[3, 2, 5]
[3, 2, 5]

But what result do we get?

[3, 2, 5]
[3, 2, 5, 10]
[3, 2, 5, 10, 20]

How?! What happened here? The array's push method has mutated the array we passed as an argument. Every time we invoke appendSumOfValues it mutates the original array (so this function is not pure). Imagine using this function inside an application that calculates different discounts based on the contents of a shopping cart :-).

I see, this might be dangerous; how can I make this function pure? We can use another array method that creates a new array and adds the element to that one instead. There are different approaches; in this case we've chosen concat.

function appendSumOfValues(entryArray) {
  const total = entryArray.reduce(
    (accumulator, currentValue) => accumulator + currentValue
  );
  const result = entryArray.concat(total);

  return result;
}

concat is an array method that does not mutate its inputs and always returns a new array. By using this approach we make appendSumOfValues pure, and now we get the result we all expect.
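
Running the earlier experiment again, every call now returns the same result and the original array stays untouched:

const original = [3, 2];

console.log(appendSumOfValues(original)); // [3, 2, 5]
console.log(appendSumOfValues(original)); // [3, 2, 5]
console.log(appendSumOfValues(original)); // [3, 2, 5]
console.log(original); // [3, 2]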

Higher-order functions

Higher-order functions are functions that receive one or more functions as arguments or return another function as their result. They are quite useful because we can pass behaviour around as functions that shape the final result. We'll focus on map, filter and reduce, the three main functional programming abstractions in JavaScript, implemented as array methods.

  • map: Applies a transformation function over each element.
  • filter: Returns all elements that passed the test implemented by the provided function.
  • reduce: Combines all the elements into a single value by applying an accumulator function to each element, starting from an initial value.

[1, 2, 3].map(n => n + 1); // => [2, 3, 4]

[1, 2, 3].filter(n => n > 1); // => [2, 3]

[1, 2, 3].reduce((acc, n) => acc + n, 0);
// 0 + 1 -> 1
// 1 + 2 -> 3
// 3 + 3 -> 6
// 6

If you want to learn more about map, filter, reduce, some... check this post.

While we're talking about these functions, it's worth mentioning the unary function. Let's see an example:

Let's transform an array of strings into an array of numbers. In order to do that we'll use the map method (to apply a function to each element of the list) and the parseInt function (which parses a string into a number). The implementation looks like this:

const result = ["1", "2", "3"].map(item => parseInt(item));
console.log(result);

We can see the function is working correctly and prints the expected result:

[1, 2, 3]

If we have a function that only receives one argument (or at least parseInt seems to), does it make sense to keep the item => ... wrapper inside map? Could we just remove it? This is what happens if we pass parseInt without wrapping it in a lambda expression:

["1", "2", "3"].map(parseInt); // => [1, NaN, NaN]

// parseInt(string = currentValue, radix = index)
// parseInt(1, 0) => 1
// parseInt(2, 1) => NaN
// parseInt(3, 2) => NaN

the result we get is [1, NaN, NaN]. This happens because map calls its callback with the signature function(element, index, array), while parseInt is defined with the signature parseInt(string, radix), even if we usually call it with only one argument.

The unary function transforms a function into a unary one, which means a function that only receives one argument.

// Unary function
const unary = fn => {
  return arg => fn(arg);
};

Using the unary function we can use parseInt without an explicit lambda:

["1", "2", "3"].map(unary(parseInt)); // [1, 2, 3]

We don't have to implement our own unary function; libraries like ramda or lodash/fp already provide one for us.
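
For instance, a minimal sketch using Ramda's unary (lodash ships a similar helper):

import { unary } from "ramda";

["1", "2", "3"].map(unary(parseInt)); // => [1, 2, 3]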

Higher-order functions (HoF) are closely related to higher-order components (HoC) from React: a HoC is a function that takes a component and returns a new component. A very well-known HoC is connect from react-redux (it receives a component and returns a new component that has access to the store):

import React, { Component } from "react";
import { connect } from "react-redux";

// Component definition
class ProfileComponent extends Component {
  render() {
    const { username, avatar } = this.props.profile;
    //                                      ^^^^^^^ profile comes from the store
    return (
      <div className="profile">
        <div className="profile__name">{username}</div>
        <img className="profile__avatar" alt="avatar" src={avatar} />
      </div>
    );
  }
}

const mapStateToProps = state => ({
  profile: state.currentUser.profile
});

export const ProfileContainer = connect(mapStateToProps)(ProfileComponent);
//                                                      ^^^^^^^^^^^^^^^^^^ Component to connect
//                              ^^^^^^^^^^^^^^^^^^^^^^^^ HoC configuration

Currying

Currying is a technique that converts a function that receives multiple arguments into a sequence of unary functions.

That means that if we have a function with N arguments, it won't fully execute until we've passed all of them, unlike regular JavaScript functions (as with parseInt, which we could call with only one argument despite it accepting two).

With currying we can reuse a function in different places with different configurations.

Suppose we want to calculate the sum of two numbers:

const add = (a, b) => a + b;

Such as:

add(3, 5); // 8
add(3)(5); // TypeError

A curried version of add will look like this:

const add = a => b => a + b;

This was done using arrow function syntax from ES6 (it is more readable once you get used to it). The same version using regular function from ES5 would look like this:

function add(a) {
  return function(b) {
    return a + b;
  };
}

What can we do with it now?

add(3)(5); // 8

const addThree = add(3); // (b) => 3 + b
addThree(5); // 8

What happens if we try to invoke it like add(3, 5)? It won't work as expected; it returns a new function instead of a number... What a shame, now I need to know whether a function is curried or not and how to invoke it... Here's where libraries like ramda or lodash/fp shine: they provide a function that converts our functions into curried ones that can be called either way:

Using curry function from Ramda:

import { curry } from "ramda";
const add = curry((a, b) => a + b);

Now we can invoke the function the way it best suits our needs:

add(3, 5); // 8
add(3)(5); // 8

By using this approach we can create some sort of preconfigured functions and use them later:

const minusOne = add(-1);
const addSixty = add(60);

// Now we can use them
minusOne(10); // 9
addSixty(10); // 70

These examples might seem silly, but the point is that this technique is very useful for configuring functions depending on the context; for instance, it can be used to implement the factory pattern and the template method pattern in JavaScript.

Could you show me a simple real world example? Let's go for it! Suppose we have a color picker composed of three sliders that update the background color of a div element (each slider handles a number between 0 and 255).

How could this be implemented using React? Well, we could create a handler for each slider:

export const ColorPicker = props => {
  return (
    <div className="colorpicker">
      <ColorSliderComponent
        value={props.color.red}
        onValueUpdated={value =>
          props.onColorUpdated({
            red: value,
            green: props.color.green,
            blue: props.color.blue
          })
        }
      />
      <ColorSliderComponent
        value={props.color.green}
        onValueUpdated={value =>
          props.onColorUpdated({
            red: props.color.red,
            green: value,
            blue: props.color.blue
          })
        }
      />
      <ColorSliderComponent
        value={props.color.blue}
        onValueUpdated={value =>
          props.onColorUpdated({
            red: props.color.red,
            green: props.color.green,
            blue: value
          })
        }
      />
    </div>
  );
};

But... this is a code smell, isn't it? We're repeating almost the same handler for each slider... Couldn't we pass the color to update as an argument to a single handler? The problem is that the handler signature wouldn't match what each slider expects. However, if we use currying we can do this:

export const ColorPicker = props => {
  const onValueUpdated = color => value => {
    props.onColorUpdated({
      ...props.color,
      [color]: value
    });
  };

  return (
    <div className="colorpicker">
      <ColorSliderComponent
        value={props.color.red}
        onValueUpdated={onValueUpdated("red")}
      />
      <ColorSliderComponent
        value={props.color.green}
        onValueUpdated={onValueUpdated("green")}
      />
      <ColorSliderComponent
        value={props.color.blue}
        onValueUpdated={onValueUpdated("blue")}
      />
    </div>
  );
};

Composition

Once we have created our pure functions, we can chain them through composition. An efficient way of composing is, again, to take advantage of Ramda. This library provides us with two functions: compose (it runs functions from right to left) and pipe (it runs functions from left to right). How do they work? They execute the functions in sequence, passing the result of each one as the input argument for the next.
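
Before diving in, a tiny sketch of the difference in execution order between the two (using Ramda and a couple of made-up helper functions):

import { compose, pipe } from "ramda";

const double = n => n * 2;
const increment = n => n + 1;

pipe(double, increment)(5);    // double first, then increment => 11
compose(double, increment)(5); // increment first, then double => 12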

Ok, you got me... Can I see a simple example? Let's implement a feature that, given an array of numbers, calculates the sum of all even numbers greater than 20.

We know we can exclude some elements using filter or calculate a sum of elements using reduce. If we apply these concepts without composition it would look like this:

const original = [80, 3, 14, 22, 30];

// Filter only even values
let aux = original.filter(value => value % 2 === 0);
// Filter only values greater than 20
aux = aux.filter(value => value > 20);
// Calculate the sum
const result = aux.reduce((accumulator, value) => accumulator + value);

console.log(result); // 132

It's weird creating temporary variables and reassigning them at every step... Is there a way to get cleaner code? We can use method chaining:

const original = [80, 3, 14, 22, 30];

const result = original
  .filter(value => value % 2 === 0)
  .filter(value => value > 20)
  .reduce((accumulator, value) => accumulator + value);

console.log(result); // 132

Ok, this looks cleaner, but the logic inside each step is still a bit cryptic... Is there a way to use more readable functions? We could extract the logic into named functions; however, we then lose the chaining pattern:

const original = [80, 3, 14, 22, 30];

const filterOnlyPairElements = values =>
  values.filter(value => value % 2 === 0);
const filterGreaterThan = values => max => values.filter(value => value > max);
const sumAllValues = values =>
  values.reduce((accumulator, value) => accumulator + value);

const result = sumAllValues(
  filterGreaterThan(filterOnlyPairElements(original))(20)
);

console.log(result); // 132

This code seems even less readable... Here comes Ramda to the rescue!

We're going to use pipe, a function that allows us to chain functions, that is, the result of each function becomes the argument of the next one:

const original = [80, 3, 14, 22, 30];

const result = R.pipe(
  R.filter(value => value % 2 === 0),
  R.filter(value => value > 20),
  R.sum
)(original);

console.log(result); // 132

Why do we now use R.filter and R.sum instead of the built-in array methods?

The key is that all Ramda functions are curried by default. This way a function like filter can receive just one argument (the predicate) and wait to be fed the second one (the array) later.
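
For instance, a quick sketch of that data-last behaviour:

import { filter } from "ramda";

const onlyEven = filter(value => value % 2 === 0); // predicate now...
onlyEven([80, 3, 14, 22, 30]); // ...data later => [80, 14, 22, 30]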

Even so, it could be a bit hard to read for newcomers... Is there a way to make it more semantic? Let's take a small step back:

const original = [80, 3, 14, 22, 30];

const filterOnlyPairElements = values =>
  R.filter(value => value % 2 === 0, values);
const filterGreaterThan = R.curry((max, values) =>
  R.filter(value => value > max, values)
);
const sumAllValues = values => R.sum(values);

const result = R.pipe(
  filterOnlyPairElements,
  filterGreaterThan(20),
  sumAllValues
)(original);

console.log(result); // 132

Like in previous examples, we could also use the built-in filter and reduce wherever we prefer, leaving the code like this:

const original = [80, 3, 14, 22, 30];

const filterOnlyPairElements = values =>
  values.filter(value => value % 2 === 0);
const filterGreaterThan = R.curry((max, values) =>
  values.filter(value => value > max)
);
// Or const filterGreaterThan = (max) => (values) => values.filter((value) => value > max);
const sumAllValues = values =>
  values.reduce((accumulator, value) => accumulator + value);

const result = R.pipe(
  filterOnlyPairElements,
  filterGreaterThan(20),
  sumAllValues
)(original);

console.log(result); // 132

The pipe function will take care of passing the arguments and results along, like this:

const result = R.pipe(
  callback1,
  callback2,
  callback3,
  callback4,
  ...
)(param1, param2, ...);

But if we still don't want to install ramda or lodash/fp and we want to get the same result as before, we could use our own pipe implementation:

const original = [80, 3, 14, 22, 30];

const filterOnlyPairElements = values =>
  values.filter(value => value % 2 === 0);
const filterGreaterThan = max => values => values.filter(value => value > max);
const sumAllValues = values =>
  values.reduce((accumulator, value) => accumulator + value);

const pipe = (...callbacks) => array =>
  callbacks.reduce((previous, callback) => callback(previous), array);

const result = pipe(
  filterOnlyPairElements,
  filterGreaterThan(20),
  sumAllValues
)(original);

console.log(result); // 132

In lodash/fp the pipe function is called flow.
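
As a rough sketch of the same example with lodash/fp (whose functions are also curried and data-last):

import { flow, filter, sum } from "lodash/fp";

const result = flow(
  filter(value => value % 2 === 0),
  filter(value => value > 20),
  sum
)([80, 3, 14, 22, 30]);

console.log(result); // 132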

Wrapping up

We made a brief introduction to functional programming in JavaScript. This programming paradigm helps us avoid side effects across our application. The underlying idea is to create pure, simple and generic functions and combine them through composition.

From this point you can experiment with existing libraries and dig into these main concepts of this paradigm. We wrap up by bringing to your attention a talk by Jim Fitzpatrick (video here) which this article is based on.

We hope that you found this article interesting.

