Digit Classification with TensorFlow and the MNIST Dataset


Machine learning has been growing by leaps and bounds in recent years, and with libraries like TensorFlow, it seems like almost anything is possible. One interesting application of neural networks is the classification of handwritten characters – in this case, digits.

This article will go through the fundamentals of creating and using a specific kind of network in TensorFlow: a convolutional neural network. Convolutional neural networks are specialized networks for image recognition that perform much better than vanilla deep neural networks.


Before diving into this project, we will need to review some concepts.


TensorFlow

TensorFlow is more than just a machine learning library; it is a library for creating distributed computation graphs, whose execution can be deferred until needed, and whose state can be saved and restored.

TensorFlow works by the creation of computation graphs. These graphs are stored and executed later, within a “session”.

By storing neural network connection weights as matrices, TensorFlow can be used to create computation graphs which are effectively neural networks. This is the primary use of TensorFlow today, and how we’ll be using it in this article.
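To make the idea of a deferred computation graph concrete, here is a toy Python sketch. This is an illustration of the concept only, not TensorFlow's actual API: nodes record an operation, and nothing is computed until an explicit "run" call.

```python
class Node:
    """A node in a toy deferred-computation graph (illustrative only)."""
    def __init__(self, op, inputs):
        self.op = op          # function to apply
        self.inputs = inputs  # upstream nodes or plain constants

    def run(self):
        # evaluate inputs (recursively), then apply this node's op
        values = [x.run() if isinstance(x, Node) else x for x in self.inputs]
        return self.op(*values)

# building the graph performs no arithmetic...
graph = Node(lambda a, b: a * b, [Node(lambda a, b: a + b, [2, 3]), 4])
# ...until we explicitly "run the session"
print(graph.run())  # (2 + 3) * 4 = 20
```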

Convolutional Neural Networks

Convolutional neural networks are networks inspired by the human visual system. Information is received as a “block” of data, like an image, and filters are applied across the entire image, which transform the image and reveal features which can be used for classification. For instance, one filter might find round edges, which could indicate a five or a six. Other filters might find straight lines, indicating a one or a seven.

The weights of these filters are learned as the model receives data; the model thus gets better and better at classifying images by getting better and better at coaxing features out with its filters.

There is much more than this to a convolutional neural network, but this will suffice for this article.
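To make the idea of a filter concrete, here is a minimal NumPy sketch, separate from the TensorFlow code later in this article, that slides a hand-written 3×3 vertical-edge filter over a tiny synthetic image:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel across an image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise product of the window and the filter, summed
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# a tiny "image": dark on the left, bright on the right
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# a vertical-edge filter: responds where brightness changes left-to-right
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

response = convolve2d(image, kernel)
print(response)  # every 3x3 window here straddles the edge, so each response is 3.0
```

A learned convolutional layer does exactly this, except that the kernel values start random and are adjusted by training.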

The Data

How do we get the data we’ll need to train this network? No problem; TensorFlow provides us some easy methods to fetch the MNIST dataset, a common machine learning dataset used to classify handwritten digits.

Simply import the input_data method from the TensorFlow MNIST tutorial namespace as below. You will need to reshape the data into a square of 28 by 28, since the original dataset is a flat list of 784 numbers per image.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data")

test_imgs = mnist.test.images.reshape(-1, 28, 28, 1)
test_lbls = mnist.test.labels

train_imgs = mnist.train.images.reshape(-1, 28, 28, 1)
train_lbls = mnist.train.labels

The Network

So how might we build such a network? Where do we start? Well lucky for us, TensorFlow provides this functionality out of the box, so there’s no need to reinvent the wheel.

The first things we must define are our input and output variables. For this, we’ll use placeholders.

X = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
y = tf.placeholder(tf.int64, shape=[None], name="y")

Next, we need to define our initial filters. To avoid vanishing/exploding gradients, a truncated normal distribution is recommended for initialization. In our case, we will have two sets of filters for our two convolutional layers.

filters = tf.Variable(tf.truncated_normal((5,5,1,32), stddev=0.1))
filters_2 = tf.Variable(tf.truncated_normal((5,5,32,64), stddev=0.1))

Finally, we need to create our actual convolutional layers. This is done using TensorFlow’s tf.nn.conv2d method. We also use a name scope to keep things organized. Note the max pooling layers between convolutional layers. The max pool layers aggregate the image data from each filter using a predefined rule, and are not trained. They simply reduce the complexity of the data by shrinking the spatial dimensions of the feature maps produced by our filters.

from tensorflow.contrib.layers import fully_connected

n_outputs = 10  # one output per digit class

with tf.name_scope("dnn"):
    convolution = tf.nn.conv2d(X, filters, strides=[1,2,2,1], padding="SAME")
    max_pool = tf.nn.max_pool(convolution, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID")
    convolution_2 = tf.nn.conv2d(max_pool, filters_2, strides=[1,2,2,1], padding="SAME")
    max_pool_2 = tf.nn.max_pool(convolution_2, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID")
    flatten = tf.reshape(max_pool_2, [-1, 2 * 2 * 64])
    predict = fully_connected(flatten, 1024, scope="predict")
    keep_prob = tf.placeholder(tf.float32)
    dropout = tf.nn.dropout(predict, keep_prob)
    logits = fully_connected(dropout, n_outputs, scope="outputs", activation_fn=None)

Also note that before our prediction layer, we have to flatten the final max pool output so we can make predictions at our fully connected layer. You can print the shapes of the various layers as shown below (once all of the ops are defined) to figure out what size your various layers need to be.

print("conv", convolution.get_shape())
print("max", max_pool.get_shape())
print("conv2", convolution_2.get_shape())
print("max2", max_pool_2.get_shape())
print("flat", flatten.get_shape())
print("predict", predict.get_shape())
print("dropout", dropout.get_shape())
print("logits", logits.get_shape())
print("logits guess", logits_guess.get_shape())
print("correct", correct.get_shape())
print("accuracy", accuracy.get_shape())

We also apply dropout to avoid overfitting, and we do not apply an activation function to our outputs. Instead, the softmax and cross-entropy are computed together at each training step, which improves numerical stability and performance.

Now to create our training and evaluation layers. We will also namespace these like the previous layers, to make things easier to understand when they are viewed in a visualization tool like TensorBoard.

Our loss is the average of the cross-entropy between the expected labels and the output of our logits; this much should make sense.

For training, we use an Adam optimizer, which is almost always recommended. The learning rate used in this article is 1e-4. This is the same learning rate that is used for TensorFlow’s own “expert” tutorial on MNIST.

Our evaluation is a little more complicated. Since we are training with batches, we need to get the output for each item in the batch. We do this by applying tf.argmax to every output list using tf.map_fn. Then, we compare the guesses to the actual values using tf.equal. Our accuracy is the mean of the correct predictions (i.e., the fraction of digits we classified correctly).
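Before looking at the TensorFlow version, here is the same evaluation logic as a standalone NumPy sketch, with invented toy logits for a batch of three images over three classes:

```python
import numpy as np

# toy logits: one row of class scores per image (illustrative values only)
logits = np.array([[0.1, 2.0, 0.3],
                   [1.5, 0.2, 0.1],
                   [0.2, 0.1, 3.0]])
labels = np.array([1, 0, 1])

guesses = np.argmax(logits, axis=1)  # per-example argmax: [1, 0, 2]
correct = (guesses == labels)        # [True, True, False]
accuracy = correct.mean()            # 2/3
print("accuracy:", accuracy)
```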

learning_rate = 1e-4

with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)

with tf.name_scope("eval"):
    logits_guess = tf.cast(tf.map_fn(tf.argmax, logits, dtype=tf.int64), tf.int64)
    correct = tf.equal(logits_guess, y)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()

To actually train the network, we will need to run through the data several times, running a batch at every iteration. In this case, we will aim for 20,000 iterations. To calculate how many epochs we will need for our batch size, we use the following code.

keep_prob_num = 0.5
batch_size = 50
goal_iterations = 20000
iterations = mnist.train.num_examples // batch_size
epochs = int(goal_iterations / iterations) # so that total iterations ends up being around goal_iterations
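With the 55,000 training examples in TensorFlow's MNIST split, this arithmetic works out as follows (a standalone check that mirrors the variables above):

```python
num_examples = 55000           # mnist.train.num_examples for the MNIST train split
batch_size = 50
goal_iterations = 20000

iterations = num_examples // batch_size      # 1100 batches per epoch
epochs = int(goal_iterations / iterations)   # 18 epochs, or about 19,800 iterations
print(iterations, epochs)
```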

Now to actually run the training operation on our graph.

with tf.Session() as sess:
    sess.run(init)  # initialize variables before training
    for i in range(epochs):
        for iteration in range(iterations):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            X_batch_shaped = X_batch.reshape(X_batch.shape[0], 28, 28, 1)
            sess.run(training_op, feed_dict = {X: X_batch_shaped, y: y_batch, keep_prob: keep_prob_num})
            print("iteration:", iteration)

It’s also recommended that you save the model and evaluate the accuracy at every epoch. You can accomplish this with the following code.


saver = tf.train.Saver()  # create once, alongside the rest of the graph

# at the end of each epoch, inside the session:
accuracy_val = sess.run(accuracy, feed_dict = {X: train_imgs, y: train_lbls, keep_prob: 1.0})
print("accuracy:", accuracy_val)
saver.save(sess, save_path)  # save_path is a checkpoint path of your choosing

After running this model through all epochs and iterations, your accuracy should be around 99.2%. Let’s check that.

with tf.Session() as sess:
    saver.restore(sess, save_path) #assume you've saved model, but could run in same session immediately after training
    accuracy_val = sess.run(accuracy, feed_dict = {X: test_imgs, y: test_lbls,  keep_prob: 1.0}) # test accuracy
    t_accuracy_val = sess.run(accuracy, feed_dict = {X: train_imgs, y: train_lbls,  keep_prob: 1.0}) # training accuracy
    print("accuracy:", accuracy_val)
    print("train accuracy:", t_accuracy_val)

Of course, in the above, the test accuracy is what’s most important, as we want our model to generalize to new data.


There are several steps you can take to improve on this model. One is to apply affine transformations to the images, creating additional images that are similar to but slightly different from the originals. This helps account for handwriting with various “tilts” and other tendencies.
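As a sketch of the augmentation idea, using plain NumPy rather than a full affine-transform library, one could generate translated copies of each image (the `shift_image` helper below is an invented illustration, not part of the article's model):

```python
import numpy as np

def shift_image(image, dx, dy):
    """Return a copy of a 2-D image translated by (dx, dy), padding with zeros."""
    shifted = np.zeros_like(image)
    h, w = image.shape
    src_y = slice(max(0, -dy), min(h, h - dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    shifted[dst_y, dst_x] = image[src_y, src_x]
    return shifted

image = np.zeros((28, 28))
image[10:18, 10:18] = 1.0  # a fake "digit" blob

# four one-pixel-shifted variants to add to the training set
augmented = [shift_image(image, dx, dy)
             for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
```

Rotations and shears follow the same pattern, just with a more general coordinate mapping.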

You can also train several copies of the same network and have them make the final prediction together, either averaging the predictions or choosing the prediction with the highest confidence.
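Both combination strategies can be sketched in a few lines of NumPy, assuming each trained network outputs a vector of per-class probabilities (the numbers below are invented for illustration):

```python
import numpy as np

# hypothetical per-class probabilities from three separately trained networks
# for a single input image (rows: networks, columns: digit classes 0-9)
predictions = np.array([
    [0.05, 0.02, 0.01, 0.70, 0.02, 0.05, 0.05, 0.05, 0.03, 0.02],
    [0.10, 0.05, 0.05, 0.55, 0.05, 0.05, 0.05, 0.05, 0.03, 0.02],
    [0.02, 0.02, 0.02, 0.80, 0.02, 0.02, 0.02, 0.02, 0.04, 0.02],
])

# strategy 1: average the probabilities, then take the most likely class
ensemble_probs = predictions.mean(axis=0)
print("averaged prediction:", int(np.argmax(ensemble_probs)))  # 3

# strategy 2: take the single most confident network's answer
net, cls = np.unravel_index(np.argmax(predictions), predictions.shape)
print("network", int(net), "is most confident, predicting", int(cls))  # network 2, class 3
```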


TensorFlow makes digit classification easier than ever. Machine learning is no longer the domain of specialists; it should be a tool in the belt of every programmer, ready to solve complex optimization, classification, and regression problems for which there is no obvious or cost-effective solution, and to power programs that must respond to new information. Machine learning is the way of the future for many problems, and as has been said in another blogger’s post: it’s unreasonably effective.

Design Patterns in JavaScript — Revisited


My original post on this subject did not dive deep into true “design patterns” but rather focused on basic inheritance in JavaScript. Since inheritance can be done in multiple ways in JavaScript, how you choose to inherit is itself a design pattern.

This particular article will look into implementing common OOP design patterns in JavaScript without violating the principles behind those patterns. Many online examples of design patterns in JavaScript violate these principles. For instance, many versions of the Singleton pattern are not inheritable, which defeats part of the purpose of the Singleton; oftentimes, you can also instantiate them directly. This article assumes you already know these patterns and why they are used, and simply want to see their implementation in JavaScript.

JavaScript is an interesting language. It is itself based on a design pattern: the Prototype pattern, in which new objects are created from a prototypical instance that supplies their initial values, trading memory for construction speed. This is exactly analogous to setting properties on a constructor’s prototype in JavaScript.

Now this naturally leads to some limitations, which can become readily apparent in the implementations below. If you’d like to contribute to a library that tries to escape some of these limitations, you can contribute to ClassJS on GitHub. (Disclaimer: It’s my project).

I suggest you run all of these examples online as you read through.

Anyway, let’s get down to business. Here are the 10 common design patterns we will go over:

  1. Singleton
  2. Abstract Factory
  3. Object Pool
  4. Adapter
  5. Bridge
  6. Decorator
  7. Chain of responsibility
  8. Memento
  9. Observer
  10. Visitor


Singleton

Now, it’s easy enough to get something in JavaScript that looks like a Singleton. But I’ll show you how to write something that actually behaves like one. That is:

  1. You cannot instantiate it
  2. It holds one single read-only instance of itself
  3. You can inherit from it

Nearly all implementations you’ll find miss out on one of these points, especially 1 or 3. To accomplish these in JavaScript, you’ll need to hide the instance in a closure, and throw an exception when the constructor is called from external code. Here’s how that works:

var Singleton = (function(){
	var current = {};

	function Singleton(){
		// Function.caller is non-standard, but works in non-strict mode
		if(Singleton.caller !== Singleton.getCurrent &&
		   Singleton.caller !== Singleton.prototype.copyPrototype){
			throw 'Cannot instantiate singleton';
		}
	}

	Singleton.prototype.sayHello = function(){
		console.log('Hello');
	};

	Singleton.getCurrent = function(){
		// current is dictionary of type to instance
		// new this() creates new instance of calling prototype
		current[this] = (current[this] || new this());
		return current[this];
	};

	// we have to relax the rules here a bit to allow inheritance
	// without modifying the original prototype
	Singleton.prototype.copyPrototype = function(){
		return new this.constructor();
	};

	return Singleton;
})();

function SpecialSingleton(){
	// ensure calling from accepted methods
	if(SpecialSingleton.caller !== SpecialSingleton.getCurrent &&
	   SpecialSingleton.caller !== SpecialSingleton.prototype.copyPrototype){
		throw 'Cannot instantiate singleton';
	}
}

// copy prototype for inheritance
SpecialSingleton.prototype = Singleton.getCurrent().copyPrototype();

SpecialSingleton.getCurrent = Singleton.getCurrent;

SpecialSingleton.prototype.sayHelloAgain = function(){
	console.log('Hi again');
};

var singleton = SpecialSingleton.getCurrent();
singleton.sayHello();      // base class method
singleton.sayHelloAgain(); // derived method

// throws error
var special = new SpecialSingleton();

Notice that we also define a copyPrototype method. This is necessary so that the shared prototype does not get altered when we create other sub-classes. We could also serialize and deserialize the prototype with a special JSON reviver that handles functions, but that would make the explanation harder to follow.

Abstract Factory

A closely related pattern, of course, is the Abstract Factory, which itself is generally a Singleton. One thing to note here is that we do not enforce the type that is returned; instead, an error will be thrown at runtime if you call a method that does not exist.

...//  Singleton from above

// create base prototype to make instances of
function Thing(){}

Thing.prototype.doSomething = function(){
    console.log('doing something!');
};

// create derived prototype to make instances of
function OtherThing(){}

// inherit thing prototype
OtherThing.prototype = new Thing();
// override doSomething method
OtherThing.prototype.doSomething = function(){
    console.log('doing another thing!');
};

function ThingFactory(){}

ThingFactory.prototype = Singleton.getCurrent().copyPrototype();

ThingFactory.getCurrent = Singleton.getCurrent;

ThingFactory.prototype.makeThing = function(){
    return new Thing();
};

function OtherThingFactory(){}

// need to use a prototype copy, or the original's prototype would be modified
OtherThingFactory.prototype = ThingFactory.getCurrent().copyPrototype();

OtherThingFactory.getCurrent = ThingFactory.getCurrent;

OtherThingFactory.prototype.makeThing = function(){
    return new OtherThing();
};

var things = [];
for(var i = 0; i < 10; ++i){
    things.push(ThingFactory.getCurrent().makeThing());
}

for(var i = 0; i < 10; ++i){
    things.push(OtherThingFactory.getCurrent().makeThing());
}

// logs 'doing something!' ten times, then 'doing another thing!' ten times
things.forEach(function(thing){ thing.doSomething(); });

Object Pool

Our resource pool in this case is also a singleton. When a resource is requested and there are not enough to meet the demand, an exception is thrown back to the client, who is expected to then release a resource before calling again.

// ... singleton from first example

function Resource(){}

Resource.prototype.doUsefulThing = function(){
	console.log('I\'m useful!');
};

var ResourcePool = (function(){
	var resources = [];
	var maxResources = Infinity;

	function ResourcePool(){
		// ensure calling from accepted methods
		if(ResourcePool.caller !== ResourcePool.getCurrent &&
		   ResourcePool.caller !== ResourcePool.prototype.copyPrototype){
			throw 'Cannot instantiate singleton';
		}
	}

	// copy prototype for inheritance
	ResourcePool.prototype = Singleton.getCurrent().copyPrototype();

	ResourcePool.getCurrent = Singleton.getCurrent;

	ResourcePool.prototype.getResource = function(){
		if(resources.length >= maxResources){
			throw 'Not enough resources to meet demand, please wait for a resource to be released';
		}
		var resource = new Resource();
		resources.push(resource);
		return resource;
	};

	ResourcePool.prototype.releaseResource = function(resource){
		resources = resources.filter(function(r){
			return r !== resource;
		});
	};

	ResourcePool.prototype.setMaxResources = function(max){
		maxResources = max;
	};

	return ResourcePool;
})();

function NeedsResources(){}

NeedsResources.prototype.doThingThatRequiresResources = function(){
	var lastResource;
	for(var i = 0; i < 11; ++i){
		try{
			lastResource = ResourcePool.getCurrent().getResource();
		}catch(e){
			// requested too many resources, let's release one and try again
			ResourcePool.getCurrent().releaseResource(lastResource);
			lastResource = ResourcePool.getCurrent().getResource();
		}
	}
};

// cap the pool at 10 so the 11th request triggers the release-and-retry path
ResourcePool.getCurrent().setMaxResources(10);

var needsResources = new NeedsResources();
needsResources.doThingThatRequiresResources();


Adapter

Our adapter is rather simple. We only interface with the modern door, but when we tell that door to open, it interfaces with the ancient door, without us having to understand the underlying implementation.

function AncientDoorway(){}

AncientDoorway.prototype.boltSet = true;
AncientDoorway.prototype.counterWeightSet = true;
AncientDoorway.prototype.pulleyInactive = true;

AncientDoorway.prototype.removeBolt = function(){
	this.boltSet = false;
};

AncientDoorway.prototype.releaseCounterWeight = function(){
	this.counterWeightSet = false;
};

AncientDoorway.prototype.engagePulley = function(){
	this.pulleyInactive = false;
};

function DoorwayAdapter(){
	this.ancientDoorway = new AncientDoorway();
}

DoorwayAdapter.prototype.open = function(){
	// drive the ancient interface behind a single modern method
	this.ancientDoorway.removeBolt();
	this.ancientDoorway.releaseCounterWeight();
	this.ancientDoorway.engagePulley();
};

DoorwayAdapter.prototype.isOpen = function(){
	return !(
		this.ancientDoorway.boltSet ||
		this.ancientDoorway.counterWeightSet ||
		this.ancientDoorway.pulleyInactive
	);
};

var someDoor = new DoorwayAdapter();
console.log(someDoor.isOpen()); // false
someDoor.open(); // uses ancient interface to open door
console.log(someDoor.isOpen()); // true


Bridge

Our bridge object delegates its responsibilities to some other class. The only thing it knows about this class is which methods it supports. At runtime, we can swap out various implementations, which have different behavior in the client class.

function BaseThing(){}

BaseThing.prototype.methodA = function(){};
BaseThing.prototype.methodB = function(){};

// if you wanted this to be truly private, you could check
// calling method, or wrap whole prototype definition in closure
BaseThing.prototype._helper = null;

BaseThing.prototype.setHelper = function(helper){
	if(!(helper instanceof BaseThingHelper)){
		throw 'Invalid helper type';
	}
	this._helper = helper;
};

function Thing(){}

Thing.prototype = new BaseThing();

// delegate responsibility to owned object
Thing.prototype.methodA = function(){
	this._helper.firstMethod();
};

Thing.prototype.methodB = function(){
	this._helper.secondMethod();
};

function BaseThingHelper(){}

BaseThingHelper.prototype.firstMethod = function(){};
BaseThingHelper.prototype.secondMethod = function(){};

function ThingHelper(){}

ThingHelper.prototype = new BaseThingHelper();

ThingHelper.prototype.firstMethod = function(){
	console.log('calling first');
};
ThingHelper.prototype.secondMethod = function(){
	console.log('calling second');
};

function OtherThingHelper(){}

OtherThingHelper.prototype = new BaseThingHelper();

OtherThingHelper.prototype.firstMethod = function(){
	console.log('calling other first');
};
OtherThingHelper.prototype.secondMethod = function(){
	console.log('calling other second');
};

var thing = new Thing();
// set helper for bridge to use
thing.setHelper(new ThingHelper());
thing.methodA();
thing.methodB();

// swap implementation
thing.setHelper(new OtherThingHelper());
thing.methodA();
thing.methodB();


Decorator

Our decorator prototypes delegate responsibility to their base classes, while adding additional functionality. They are instantiated by passing an object of the same type for them to wrap. When the calls propagate all the way to the base class, the original wrapped object’s method is called.

// LCD prototype
function BaseThing(){}

BaseThing.prototype.doSomething = function(){};

// implementation (client code)
function Thing(){}

Thing.prototype = new BaseThing();
Thing.prototype.doSomething = function(){};

// wrapper classes for decoration
function ThingWrapper(wrappedObject){
	if(!(wrappedObject instanceof BaseThing)){
		throw 'Invalid wrapped prototype type';
	}
	this._wrappedObject = wrappedObject;
}

ThingWrapper.prototype = new Thing();
ThingWrapper.prototype._wrappedObject = null;
ThingWrapper.prototype.doSomething = function(){
	// delegate to wrapped class
	this._wrappedObject.doSomething();
};

function CoolThing(wrappedObject){
	ThingWrapper.call(this, wrappedObject);
}

CoolThing.prototype = new ThingWrapper(new Thing());
CoolThing.prototype.doSomething = function(){
	console.log('doing something cool!');
	// propagate to the base class, which calls the wrapped object's method
	ThingWrapper.prototype.doSomething.call(this);
};

function AwesomeThing(wrappedObject){
	ThingWrapper.call(this, wrappedObject);
}

AwesomeThing.prototype = new ThingWrapper(new Thing());
AwesomeThing.prototype.doSomething = function(){
	console.log('doing something awesome!');
	ThingWrapper.prototype.doSomething.call(this);
};

// logs 'doing something awesome!', then 'doing something cool!'
var wrappedThing = new AwesomeThing(new CoolThing(new Thing()));
wrappedThing.doSomething();

// throws error: a wrapper must be given something to wrap
var x = new ThingWrapper();

Chain of Responsibility

With chain of responsibility, various handlers are created for different events. Multiple handlers can handle multiple events, and multiple handlers may exist for the same event. All handlers keep a reference to the next handler, and handlers delegate their responsibility to the base class if they cannot handle an event. In this case, the base class will then ask the next handler to handle the event, and so on. The last handler handles all events, so we don’t have to worry about an event going nowhere.

var EventTypes = {
	Magic: 0,
	Cool: 1,
	Awesome: 2
};

function Handler(){}

Handler.prototype._nextHandler = null;

Handler.prototype.addHandler = function(handler){
	if(!(handler instanceof Handler)){
		throw 'Invalid handler type';
	}
	// if it already has a handler, append the handler to the next one
	// this process will propagate to the end of the chain
	if(this._nextHandler){
		this._nextHandler.addHandler(handler);
	}else{
		this._nextHandler = handler;
	}
};

// tell the next handler to try to handle the event
Handler.prototype.execute = function(eventType){
	if(this._nextHandler){
		this._nextHandler.execute(eventType);
	}
};

function CoolHandler(){}
CoolHandler.prototype = new Handler();
CoolHandler.prototype.execute = function(eventType){
	if(eventType !== EventTypes.Cool){
		console.log('delegated uncool event');
		// tell the base handler to pass it to another handler
		return Handler.prototype.execute.call(this, eventType);
	}
	console.log('handled cool event');
};

function AwesomeHandler(){}
AwesomeHandler.prototype = new Handler();
AwesomeHandler.prototype.execute = function(eventType){
	if(eventType !== EventTypes.Awesome){
		console.log('delegated non-awesome event');
		return Handler.prototype.execute.call(this, eventType);
	}
	console.log('handled awesome event');
};

function AnythingHandler(){}
AnythingHandler.prototype = new Handler();
AnythingHandler.prototype.execute = function(eventType){
	console.log('handled any event');
};

var root = new Handler();
root.addHandler(new CoolHandler());
root.addHandler(new AwesomeHandler());
root.addHandler(new AnythingHandler());

root.execute(EventTypes.Cool);    // handled cool event
root.execute(EventTypes.Awesome); // delegated uncool event, then handled awesome event
root.execute(EventTypes.Magic);   // delegated twice, then handled any event



Memento

Mementos can be very useful in JavaScript, such as when storing the application state in localStorage to be loaded when the session starts again.

In this case, we are simply saving a count variable, and restoring that count when we want. This causes the count to start all over again, before we call increment a few more times.

function Saveable(){
	this._count = 0;
}

Saveable.prototype.save = function(){
	return new SavedState(this._count);
};

Saveable.prototype.restore = function(savedState){
	this._count = savedState.getState();
	console.log('count reset to ' + String(this._count));
};

Saveable.prototype.increment = function(){
	this._count++;
	this.logValue();
};

Saveable.prototype.logValue = function(){
	console.log(this._count);
};

function SavedState(count){
	this._count = count;
}

SavedState.prototype.getState = function(){
	return this._count;
};

// state manager holds reference to thing that can be saved, and acts on it
function StateManager(){
	this._saveable = new Saveable();
}

StateManager.prototype.getSavedState = function(){
	return this._saveable.save();
};

StateManager.prototype.setSavedState = function(savedState){
	this._saveable.restore(savedState);
};

StateManager.prototype.increment = function(){
	this._saveable.increment();
};

var stateManager = new StateManager();
// logs 1,2,3
for(var i = 0; i < 3; ++i){
	stateManager.increment();
}
// state is now 3
var memento = stateManager.getSavedState();
// logs 4,5,6
for(var i = 0; i < 3; ++i){
	stateManager.increment();
}
// state restored to 3
stateManager.setSavedState(memento);
// logs 4,5,6 again
for(var i = 0; i < 3; ++i){
	stateManager.increment();
}


Observer

Observer is a competing pattern in JavaScript with pub/sub. Pub/sub is oftentimes somewhat easier to implement, given the event-driven nature of JavaScript.

Use observer over pub/sub when you want the handlers and subjects to be more closely integrated, when your events flow in one direction, or when you want shared functionality in all observing or observed objects.

function Person(name){
	this._observers = [];
	if(name){ this.name = name; }
}

Person.prototype.name = '';

Person.prototype.setName = function(name){
	this.name = name;
	// notify every observer of the change
	this._observers.forEach(function(observer){
		observer.update();
	});
};

Person.prototype.observe = function(observer){
	this._observers.push(observer);
};

function Observer(subject){
	this._subject = subject;
}

Observer.prototype.update = function(){};

function NameObserver(subject){
	Observer.call(this, subject);
}

NameObserver.prototype = new Observer();

NameObserver.prototype.update = function(){
	console.log('new name: ' + this._subject.name);
};

function NameLengthObserver(subject){
	Observer.call(this, subject);
}

NameLengthObserver.prototype = new Observer();

NameLengthObserver.prototype.update = function(){
	console.log('new length of name: ' + this._subject.name.length);
};

var person = new Person();
person.observe(new NameObserver(person));
person.observe(new NameLengthObserver(person));
// all observers are called for each change
person.setName('Benjamin');  // logs new name, then length of 8
person.setName('Alexander'); // logs new name, then length of 9


Visitor

The visitor pattern relies on polymorphism to cause the correct handlers to be called. Since JavaScript does not have type-based method signatures, we instead create methods named ‘visit’ + elementTypeName, and call these on the visitor classes.

This also means that we need to check that the methods exist, and log or throw an exception when there is no valid handler; and that we need to store the type names of each prototype, since JavaScript provides no easy way to see the most-derived type.

This pattern allows us to handle each element in a list in a different way depending on its type, without having to add various method implementations to each one; and to handle each element in multiple ways depending on what visitor is visiting the element.

function Visitor(){}

Visitor.prototype.visit = function(element){
	if(!(('visit' + element.typeName) in this)){
		return console.log('No handler for element of type ' + element.typeName);
	}
	// redirect to type-specific visit method
	this['visit' + element.typeName](element);
};

function Element(){}

Element.prototype.typeName = 'Element';

Element.prototype.accept = function(visitor){
	visitor.visit(this);
};

function CoolElement(){}

CoolElement.prototype = new Element();
CoolElement.prototype.typeName = 'CoolElement';

function AwesomeElement(){}

AwesomeElement.prototype = new Element();
AwesomeElement.prototype.typeName = 'AwesomeElement';

function CoolAwesomeVisitor(){}

CoolAwesomeVisitor.prototype = new Visitor();

// define type-specific visit methods to be called
CoolAwesomeVisitor.prototype.visitCoolElement = function(element){
	console.log('cool awesome visitor visiting cool element');
};

CoolAwesomeVisitor.prototype.visitAwesomeElement = function(element){
	console.log('cool awesome visitor visiting awesome element');
};

function AwesomeVisitor(){}

AwesomeVisitor.prototype = new Visitor();

AwesomeVisitor.prototype.visitAwesomeElement = function(element){
	console.log('awesome visitor visiting awesome element');
};

var visitors = [
	new CoolAwesomeVisitor(),
	new AwesomeVisitor()
];

var elements = [
	new CoolElement(),
	new AwesomeElement()
];

// each visitor visits each element; AwesomeVisitor has no handler for
// CoolElement, so that combination logs a 'No handler' message
visitors.forEach(function(visitor){
	elements.forEach(function(element){
		element.accept(visitor);
	});
});


So that’s all for patterns now! I think this is the longest post I’ve ever written, and I intend to keep expanding on this as a good resource.

If you want to know more about these patterns in general, and what they’re used for, I highly recommend sourcemaking.


Oftentimes, you’ll see me wrap a class definition in a module like so:

var Class = (function(){
   // note: 'private' is a reserved word in strict mode; this relies on sloppy mode
   var private = {};
   function Class(){}
   Class.prototype.setPrivate = function(value){
       private[this] = value;
   };
   Class.prototype.getPrivate = function(){
       return private[this];
   };
   // caveat: object keys are coerced to strings, so instances that stringify
   // alike will share a slot unless toString is overridden per instance
   return Class;
})();
The reason for this is fairly intuitive. In JS, you have to choose between two things: inheritance, and data hiding. You can’t have something akin to a private variable inherited by sub-classes. I’ll show you two common patterns that illustrate this.

function Class(){
    var private;
    this.setPrivate = function(value){
        private = value;
    };
    this.getPrivate = function(){
        return private;
    };
}
Well… the variable is private. However, those getters and setters won’t be inherited, because they’re not on the prototype. You can technically solve this by calling the parent constructor from within the sub-class constructor, but I prefer the pattern I use.

The other possibility is this:

function Class(){}

Class.prototype._private = null;
Class.prototype.setPrivate = function(value){
    this._private = value;
};
Class.prototype.getPrivate = function(){
    return this._private;
};

This is a little better in some ways, and worse in others. Our data is no longer hidden; we’re relying on a naming convention to deter other developers from accessing it. The properties will be inherited from the prototype, however.

Because of the reasons above, I tend to use the first pattern as a best practice, but depending on the situation any one of these may work fine.

CSS Stacking Contexts


Today we’ll be learning about a lesser-known feature of CSS: stacking contexts.

You may have been working on a project before and been surprised when you set the z-index for an element and it refused to move forward, remaining behind some other stubborn element.

There’s a reason for this behavior, and that reason is stacking contexts.

Stacking Context

A stacking context is essentially a group of elements whose z-index value determines their position within that group. If two elements do not share a stacking context, then they will ignore each other’s z-index values.

In this case, the stacking order is based on their relative order in the DOM (See image under “Creating a Stack”).

Creating a Stack

All of the common stacking context types. Order is relative, fixed, absolute, opacity, transform.

A stacking context is created in the following cases:

  • The root stacking context (html element)
  • Absolute or relative position with a set z-index
  • Fixed position
  • Opacity less than 1
  • A set transform
  • A few other less common instances

I’ll be covering only the common instances that developers will normally encounter.

The Root Stacking Context

This case is pretty clear. Initially, all elements are part of a single stacking context under the DOM, meaning that their relative position on the z axis is determined entirely by their z-index property. If no z-index is set, their order is determined by the order in which they appear in the DOM (see image under “Creating a Stack”).

Absolute or Relative Position With a Set Z-Index

This case is the second-most common. This is almost always intentional, but occasionally, developers may try to position an element in another stacking context over some absolutely positioned element and find that it’s not possible.

Fixed Position

Another common case, but one that can be confusing. Most but not all browsers have this behavior now. Fixed position elements create their own stacking context, which, without a z-index, normally places them behind the document root. This can create a case of disappearing elements.

Opacity less than 1

This is a rare case, but one that everyone should be aware of. If you’re going to set opacity, then you have to know the consequence will be a new stacking context. If all you want is a translucent element, it will be more predictable if you simply set an rgba background with an alpha less than 1.

The reason for this is clear: If it did not create a new stacking context, what elements would show through the transparent element?

A Set Transform

This is a case which is more and more common lately, as CSS transforms become the norm. This often throws people off, as we assume when we scale an element it should retain its position in the flow of the document. The new stacking context can cause a transformed element to hide menus and other elements which would normally appear in front.

How Stacking Contexts Interact

Of course, the most important thing is how to apply this knowledge to create layouts and fix problems in the real world. For this reason, I’ve supplied some examples of how stacking contexts interact with each other. Most importantly, how do their children determine their z-positioning relative to other stacking contexts’ children?

Well, using the example from “Creating a Stack”, here’s what happens:

Z-Index Set

Z-Index on Relative Element’s Children

If we set the z-index of the child elements, the result is the same as our original elements.

Z-Index Positive, Position Relative

Z-Index on Relative Element’s Children – Children Are Relatively Positioned

If we set the z-index of the child elements to a positive value, but additionally set the children’s positions to relative (creating a new stacking context for each child), then they will position completely independently of their parent, moving out in front of the other elements.

Z-Index Negative, Position Relative

Negative Z-Index on Relative Element’s Children – Children Are Relatively Positioned

If we set the z-index of the child elements to a negative value, but additionally set the position to relative (creating a new stacking context for each child), then they will position completely independently of their parent, moving behind the other elements.

Z-Index Greater Than Other Stacking Context’s Children

Relative and Fixed Element with a Set Z-Index, Children With Set Z-Index Values

In this case, we have given both the relative element, and the fixed element a z-index. The z-indices of their children do not interact, so even though the relative children are positioned ahead of the fixed children, they do not appear that way. The children are each in separate stacking contexts, though their parents share the same stacking context.


Stacking contexts are groups of elements whose z-index values position them along the z axis relative to each other. If an element is the root of a stacking context, its children will ignore the z-index values of the children of other stacking contexts, even if those values are larger than their own.

Stacking contexts are very important when creating layouts in CSS. A lack of understanding of stacking contexts can lead to difficulty implementing relatively simple UIs, and to trouble fixing bugs which commonly arise in today’s UIs. Stacking contexts are very often created when showing things like menus, popups, and windows. These types of UI controls are ubiquitous in web applications today, which makes knowledge of stacking contexts and how they interact just as essential.

Representational State Transfer (REST)

What is REST?

REST is an architecture which describes a system that transfers generally non-static content between a client and server. This content is called a resource, and is always some uniquely identifiable “thing”. RESTful services implemented on top of HTTP are a popular solution for web applications today.


REST is representational in the sense that every request must uniquely identify a resource. What counts as uniquely identifiable is essentially defined by the system, and depends on the level of granularity at which the system works.

For instance, on one system, perhaps a hospital is a unique resource, but on another system, each of the hospital’s buildings is considered independently, so you could not, for instance, request the completion date of the hospital, only of a specific hospital building.


A RESTful service always returns stateful data. That is, it is returning the current or specified state of the specific resource, which is not necessarily and not usually composed of static data.

For instance, suppose that our hospital added a new wing. The resource representing the hospital, if it is a correct stateful representation, would then reflect this new building. Any request made prior to the addition of the new wing would have returned its state at that time – without the new wing.

This should not be confused with the statelessness of the requests. A RESTful server maintains no data about the state of the client or its requests.


Transfer of course refers to the movement of data between a client and server. Data can flow both ways in a RESTful service, which usually supports the CRUD operations, in addition to request types like HTTP OPTIONS.

How did REST come about?

First, HTTP

The Hypertext Transfer Protocol, or HTTP, was the necessary precursor to what we consider a modern implementation of a RESTful service. HTTP provides a client-server architecture that focuses on text-based requests of documents. Because the text-based requests use URIs (uniform resource identifiers), they are uniquely suited for use in a REST implementation, which is based on the concept of resources.

HTTP was originally proposed by Tim Berners-Lee, as a document storage and retrieval system between remote clients and servers. The original HTTP had only one method, GET, meaning that it would not be as suitable for a REST implementation as it is today.

HTTP soon added many methods, which made HTTP suited for a REST implementation. These methods included POST, PUT, and DELETE, which are used in today’s RESTful services to represent create, update, and delete operations respectively.
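
To make the mapping concrete, here’s a small sketch of how a hypothetical RESTful service over HTTP might pair CRUD operations with these methods (the /hospitals URLs are invented for illustration):

```javascript
// Hypothetical resource URLs; the HTTP method carries the operation's meaning.
var crudToHttp = {
  create: { method: 'POST',   url: '/hospitals' },    // add a new hospital
  read:   { method: 'GET',    url: '/hospitals/42' }, // fetch hospital 42
  update: { method: 'PUT',    url: '/hospitals/42' }, // replace hospital 42
  delete: { method: 'DELETE', url: '/hospitals/42' }  // remove hospital 42
};

console.log(crudToHttp.create.method); // logs "POST"
```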

The Concept

The term was coined in 2000 in Roy Fielding’s PhD dissertation. The concepts behind REST served as the backbone for the URI standard used in HTTP requests, and HTTP was therefore RESTful in its initial implementation (v 1.1). The difference between this and modern REST usage is that the resource can be many more things than simply a static HTML document.

From this point, RESTful concepts were heavily adopted in the Web 2.0 age of asynchronous requests which loaded content in real-time into the browser. RESTful concepts enabled relatively simple and very consistent APIs to be created which abstracted this process heavily and eased implementation of complex applications handling asynchronous data requests.

What makes something RESTful?

There are five aspects necessary for a system to be considered RESTful, and one optional one.

Client-Server Interactions

The architecture must start with a client-server model, where a single server hosts the unique resource, which a client may request.

Stateless Requests

This constraint means that the server cannot store session data for a client. Each request must include all the state necessary to execute it.


Cacheable Requests

Requests must be created in such a manner as to be identifiably cacheable or not. This allows an intermediate component to perform caching without special system knowledge.

Layered Architecture

Each point of processing should not have awareness of other parts of the processing chain.

Uniform Interface

The system must define a consistent API, which decouples the requests from the implementation.

Transfer of Logic

An additional concept sometimes considered is the ability to transfer logic representations that can be executed on the client. This includes scripts, applets, etc. Many people are surprised to learn that this idea is part of the original PhD dissertation, and that implementation did not catch up to the possibility of the concepts for about a decade.

What is the purpose of REST?

The purpose of REST is to provide an architecture that creates sufficient abstraction in a large, complex, distributed system of unique resources, so that a client-server model of resource access, alteration, and creation can occur without significant complexity and overhead – even in a system of global scale such as the World Wide Web has become today.

How is REST Used?

REST today is the backbone of HTTP. All common HTTP requests are stateless, and conform to all five criteria for a REST implementation.

However, the more common modern usage of REST is in the implementation of a RESTful service on top of HTTP. These services provide a layer of abstraction over data access and representation, so that client code can easily manipulate the resulting structures and compose requests to interact with this data.

Many times, the RESTful service is implemented as an API for simple external access, with an authentication scheme built on top of it. These heavily abstracted interfaces allow several entirely separate applications to consume and alter data in a manner consistent with their implementation, while maintaining a single distinct data source.

They also allow the total request load to be distributed across several servers, which simply have to be aware of the API implementation and a data source.


REST is an important architectural model that defines itself as a set of five or six restrictions on top of an “unbounded” architecture. REST is not any one implementation, or any one concept or use case. It’s a highly extensible architecture that drives the web as we know it today, but is independent of its implementation in HTTP.

Many descriptions of REST are overly academic or too specific to a single implementation. I hope that I’ve provided a good resource on the fundamental meaning and purpose of REST, independent of HTTP, as well as its use in HTTP and web services today.

Given that this is a complex topic, on which all information essentially traces itself back to that single dissertation, it’s possible that some information may be inaccurate, so please let me know if you find these types of mistakes and they will be corrected immediately.

The CSS Box Model


One of the most poorly understood components of a web application is the styling. Many of the developers I’ve worked with haven’t taken the time to learn the principles that CSS relies on — especially how the rules cascade — and how padding, margins, borders, and content create the layouts of a page.

The latter is called the “box model” and is what we’ll be looking at today.

The box model is composed of four parts:

  • Content
  • Padding
  • Borders
  • Margins


CSS Box Model Content

In a block element, the content area is determined by:

  • The height and width, if set, otherwise
  • The height and width of its content

In an inline element (almost anything directly containing text) the content area is determined by:

  • The line-height and width, if set, otherwise
  • The height of a line (font size), and the width of its container

It’s important to note that an inline element’s borders, padding, and margins will apply to each line that the content appears on.


CSS Box Model Padding

Padding is the space between the border and the edge of the element’s content area. You can think of it as a margin between the border and content.

Background colors apply only to the content area and padding space.


CSS Box Model Border

Borders begin immediately outside of the content area, which is essentially all you need to know. This is true for both inline and block elements.

Background colors apply to all space inside the border.


CSS Box Model Margin

Margins begin just outside of the border, and determine the space between it and the elements around it.


The CSS box-sizing property can cause exceptions to the above rules. The two widely supported values for box-sizing are:

  • content-box
  • border-box


Our original CSS Box

content-box is the default value for this property, and means that elements will behave as shown above. In other words, the content determines the height and width, or, if set explicitly, the height and width control the size of the content area.

This means that an element which has a width of 50px with 5px of padding, a 1px border, and 3px of margins would take up 50px + (5px + 1px + 3px) * 2 (two sides) = 68px of width.


The same box but using border-box

In this case, the width and height, when set, control the size of the element including content, padding and borders. Only the margins are not included.

This means that an element which has a width of 50px with 5px of margins would take up 50px + 5px * 2 (two sides) = 60px of width, regardless of padding or borders.
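
The two calculations above can be written out as CSS (class names are just illustrative):

```css
/* content-box (the default): rendered width = 50 + (5 + 1 + 3) * 2 = 68px */
.content-box {
  box-sizing: content-box;
  width: 50px;
  padding: 5px;
  border: 1px solid black;
  margin: 3px;
}

/* border-box: rendered width = 50 + 3 * 2 = 60px; the padding and border
   are carved out of the 50px instead of being added to it */
.border-box {
  box-sizing: border-box;
  width: 50px;
  padding: 5px;
  border: 1px solid black;
  margin: 3px;
}
```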


A third value, padding-box, is not supported in most browsers (and has since been dropped from the specification), but where supported the height and width include both the content and padding.

Margin collapsing

Another important exception to these rules is margin collapsing. Margin collapsing means that instead of two elements’ margins being “added” together, they overlap one another, and only the larger margin is displayed.

Margin collapsing occurs in 3 basic cases:

  • Adjacent sibling elements
  • A parent whose first or last child’s margin collides with the parent’s margin
  • Empty elements

Adjacent sibling elements

Adjacent Siblings Margin Collapse
Note how the center margin is 15px instead of 30px

If two block elements are located one after another, their adjacent vertical margins will collapse.
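
In CSS, the image above corresponds to something like this (class name illustrative):

```css
/* Each sibling declares 15px above and below; the adjacent margins
   collapse, so the gap between two siblings is 15px, not 30px. */
.sibling {
  margin: 15px 0;
}
```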

Parent with first/last child margin collision

This occurs when the top margin of a parent element “touches” the top margin of its first child, or when the bottom margin of a parent element “touches” the bottom margin of its last child.

In either of these cases, the child element margin is “pushed” outside of the parent element, and the larger of the two is what will be displayed.

Empty blocks

When a box’s top and bottom margins touch (because there is no content), the margins will collapse, meaning it will essentially have only one margin: the larger of the two.


The CSS box model is very simple once you understand and apply it. Knowing these fundamental rules of CSS layouts, as well as the gotchas that can occur, should help you make quick work of many common layout problems.

Design Patterns in JavaScript


In this article I will introduce you to some common design patterns in JavaScript, including patterns commonly seen in OOP languages, with some comparison and contrast after each explanation where useful.


There’s a common refrain in the programming community that JavaScript isn’t a real language. After all, it doesn’t appear to support inheritance, data protection, or any of the other things programmers are used to seeing in modern programming languages.

It’s my opinion that this is a fallacy. I believe people are so used to the “way things are done”, as taught in college courses, that they simply reject the “new way” without giving it a chance. JavaScript is that new way, and it’s capable of nearly all the things that OOP languages are.

I will show you how to do those things and more in this post.

Basic Inheritance

In JavaScript, the concept of inheritance is implemented through the idea of shared “prototypes”. A prototype is exactly what it sounds like: a definition of our object and what we believe it should do.

The prototype is set via the prototype property of an object, like so:

function Thing(){}

Thing.prototype.doSomething = function(){
  console.log('I\'m doing something!');
};

If you’re new to JavaScript, your first question might be why a function has properties in the first place. In JavaScript everything is an object — ’nuff said.

Now, what does this have to do with inheritance? Well, if we want object SubThing to do something that object Thing already does, then we can set an instance of Thing as SubThing’s prototype, based on what we said above. This is how we do that:

function SubThing(){}

SubThing.prototype = new Thing();

This is basic inheritance in JavaScript. It works because the new keyword calls the function definition that follows it, then returns an object with a hidden [[prototype]] property based on the visible prototype property that we set.

Creating Instances in JavaScript

Creating instances in JavaScript is done in much the same way as any other language, but someone coming from a more classic OOP language may not understand where exactly the object to instantiate is here.

Any function definition can be used with the new keyword in JavaScript to initialize an object. Using one of our function definitions above, that would look like this:

var subThing = new SubThing();

As stated above, what this really does is create an object with a hidden [[prototype]] property based on the prototype we defined (in this case inherited from Thing).

Accessing Properties of an Object

So far, we’ve explained what a prototype is, and how to define properties on this prototype, but we haven’t actually shown how these properties are eventually used.

Any property access, specified with the dot operator (obj.property) or dictionary syntax (obj['property']), will begin by looking at the object’s direct properties, and then begin crawling up the prototype chain.

This prototype chain is based on the hidden [[prototype]] property set on the object we instantiated when using the new keyword.

It is called a chain, because it will check the current object’s prototype, then any inherited prototypes, in the order they were inherited.

As an example, suppose we redefined SubThing this way:

function SubThing(){}

SubThing.prototype = new Thing();
SubThing.prototype.doSomethingElse = function(){
  console.log('I\'m doing something else!');
};

var subThing = new SubThing();

Calling subThing.doSomethingElse() will call the method above, as the instance has the hidden [[prototype]] property we talked about before, created using new.

What does calling subThing.doSomething() do? It first looks at subThing’s own properties and its prototype. Finding no property with this name, it looks at inherited prototypes. It will find the inherited Thing prototype and call the appropriate method. This is the final step in JavaScript inheritance.

Of course, inheritance of properties is only one aspect of OOP. What follows are methods for implementing other common patterns.

Instance Properties

This section exists only to distinguish between static and instance properties in JavaScript. Inheriting instance properties works in exactly the same way as inheriting methods. Anything placed on the prototype is essentially treated as a direct property of any instance.
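
Since this section has no example of its own, here’s a minimal sketch (Counter, count, and step are illustrative names):

```javascript
function Counter(){
  this.count = 0; // a true instance property, set in the constructor
}
Counter.prototype.step = 1; // lives on the prototype, but reads like an instance property

var a = new Counter();
var b = new Counter();
a.count = 5;          // only changes a's own copy
console.log(b.count); // logs 0
console.log(a.step);  // logs 1, found on the prototype
```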

Static Properties

To create a static property, you simply add a property to the function definition itself. This property will be the same for all instances. This works because everything, even a function, is an object in JavaScript.

function Thing(){}

Thing.property = 'STATIC! :D';

console.log(Thing.property); // logs 'STATIC! :D'

Unlike some OOP languages, however, the property must always be accessed and set on the function (class) itself. Its instances do not have access to the property.
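
For example (Gadget and madeIn are illustrative names):

```javascript
function Gadget(){}

Gadget.madeIn = 'factory'; // static: stored on the function object itself

var g = new Gadget();
console.log(Gadget.madeIn); // logs "factory"
console.log(g.madeIn);      // logs undefined, since instances look up
                            // Gadget.prototype, not the Gadget function
```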

Private Data

Private variables are implemented as variables which are initialized in the constructor of the function (class definition).

These variables are scoped to the constructor and can only be accessed within the constructor.

Methods which will access private data are created inside the same constructor.

function Thing(){
  var private = "private";
  // 'this' refers to the instance in this case
  this.setPrivate = function(value){
    private = value;
  };
  this.getPrivate = function(){
    return private;
  };
}

var thing = new Thing(); // calls constructor
console.log(thing.getPrivate()); // logs "private"
thing.setPrivate("exposed!");
console.log(thing.getPrivate()); // logs "exposed!"

Public Data

Public data is any data either declared as part of the prototype, or added to the instance during the constructor call.

function Thing(){
  this.public = "I'm public!";
}

var thing = new Thing();
console.log(thing.public); // Logs "I'm public!"

function Thing(){}

Thing.prototype.public = "Me too!";

var thing = new Thing();
console.log(thing.public); // Logs "Me too!"

Bonus: Static Inheritance

Here’s something you can’t do in most classical OOP languages: static inheritance.

function Thing(){}

Thing.static = "I'm static!";

function SubThing(){}

SubThing.static = Thing.static;

It’s really that simple!

For methods, you can use the following pattern:

function Thing(){}

Thing.static = "hi";

Thing.staticMethod = function(){
  // 'this' refers to the caller
  console.log(this.static);
};

Thing.staticMethod(); // logs "hi"

function SubThing(){}

SubThing.static = "hi again!";

SubThing.staticMethod = Thing.staticMethod;

SubThing.staticMethod(); // logs "hi again!"

The reason this works is that “this” refers to the caller in JavaScript, so it uses the “static” property of the current caller, which is Thing, then SubThing, respectively.


Thanks for reading! Hope this helps anyone interested in writing maintainable code in JavaScript, and even more so that it helps people get rid of the mindset that JavaScript isn’t a “real” language and finally start treating it like one instead of writing spaghetti code!

I tried to write this all to work as-is, but please let me know if you find any errors.