The Real Way to Move Your WordPress Site to the Root Directory

Introduction

WordPress is a popular PHP-based blog and CMS platform, and it can be a fast, effective solution for simple websites, blogs, and e-commerce sites. By default, WordPress is often installed in a wordpress subdirectory under the web root, which results in URLs like http://.../wordpress/.... This is generally not very appealing, especially for a production sales site or a popular blog.

If you look online, many how-tos will walk you through all kinds of complicated URL-rewriting schemes and long lists of commands just to move your WordPress installation (and God forbid you have any plugins installed). Luckily, there's a much easier way to move your WordPress installation, so let's get to that.

How To Do It

There are only a couple of steps involved in moving your WordPress installation.

Change Your WordPress Location

First, go to the Settings tab on the left side of your WordPress admin page. Then remove wordpress (or whatever your installation directory is named) from the end of both URLs shown in the image below. This should leave you with just the URL of your website, with no trailing slash. Save these settings.

You may see an error, and your WordPress site will likely be down until you complete the next step.

Change WordPress Directory

Change the Root Directory of Your Apache Installation

Now you will need to edit the Apache configuration for your enabled site and change the web server's document root. Open a terminal on your Linux machine, or SSH into the server (on Windows, you can use a tool like PuTTY for the SSH part). The config is usually located in /etc/apache2/sites-enabled, so you can run cd /etc/apache2/sites-enabled to get there.

Edit Config

Now, run nano 000-default.conf and change the line DocumentRoot /var/www/html/ to DocumentRoot /var/www/html/wordpress/ (or whatever your installation directory is named).
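For reference, the relevant part of a stock Debian/Ubuntu 000-default.conf would look something like this after the edit (the surrounding directives are illustrative and may differ on your system):

```apache
<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        # point the document root at the WordPress directory
        DocumentRoot /var/www/html/wordpress

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```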

Change Root

Run service apache2 restart to restart your web server (you can check your edit first with apachectl configtest), and voilà: your WordPress site is now served from the root URL of your website.

Restart Apache

Considerations

This method only works if no other websites are served from the root directory, and it also means you will not be able to serve files from the root directory (you'll have to move them into the WordPress directory). If that is not a concern and you only host a WordPress website, then you should definitely use this method; it's much easier than the alternatives out there.

Conclusion

This is a simple, effective way to serve your WordPress installation from the root URL of your website, and it can save you a lot of time over other methods. It has drawbacks if you want to serve multiple sites, but if that isn't your goal, it will quickly solve your problem.

Disabling State Auto-Update in OpenHAB 2

Intro

One major complication of using openHAB in an enterprise environment is that its event handling does not always play nicely with custom bindings. There are times when you want to control state updates from your binding, instead of having the state updated immediately after a command is posted.

In openHAB 1 you can statically define your items in definition files, which can include an autoupdate="false" configuration that prevents this behavior. But what if your items are not defined statically this way? Well, as it turns out, you'll have to do a bit more work.
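For comparison, a statically defined item with this setting would look something like the following in an openHAB 1 .items file (the item name and the KNX binding config are made up for illustration):

```
Switch Kitchen_Light "Kitchen Light" { knx="1/0/15", autoupdate="false" }
```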

The Binding

The autoupdate feature is actually a binding that is installed by default with openHAB 2. This post should save you what could easily be a couple of weeks of digging, since stack traces are close to useless in OSGi applications.

The binding responsible for this behavior turns out to be the Eclipse SmartHome autoupdate binding. If you follow the link, you'll see that it's not at all clear how to disable the behavior. There are, in fact, two ways to do it.

Disabling Auto-Update

Method 1: Remove the binding

The easiest way to disable auto-update is to remove the binding entirely from openHAB. You can do this by opening the openHAB 2 console (ssh openhab@localhost -p 8101 on Linux, openHAB/start.bat on Windows), then typing bundle:uninstall "Eclipse SmartHome AutoUpdate Binding".

This comes with a side effect, though: any binding that depends on the auto-update feature will stop working or behave erratically. This solution works best when you're creating a custom installation that only uses your own binding.

Method 2: Create a Config Provider

Another option is to implement your own AutoUpdateBindingConfigProvider and register it with OSGi. If you look at the default provider, you can see that it reads the auto-update property from the config files (which are no longer used in openHAB 2). You could instead create a provider that simply returns false for all of your things/channels/items, which prevents auto-update on exactly those items. You just have to make sure it is registered as a provider with OSGi. You can do this by copying this file into the OSGI-INF directory of your binding, replacing AutoUpdateGenericBindingConfigProvider with your own BindingConfigProvider implementation, and removing the BindingConfigReader entry. Since the default implementation simply returns null, the value returned by your new provider determines whether auto-update is applied.
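As a rough sketch, such a provider can be very small. The interface shape below is assumed from the openHAB 1 equivalent (a single Boolean autoUpdate(String itemName) method), so verify it against your Eclipse SmartHome version; the stand-in interface declaration exists only to keep this example self-contained, and the class name is made up.

```java
// Stand-in for org.eclipse.smarthome.core.autoupdate.AutoUpdateBindingConfigProvider;
// in a real binding you would import the real interface instead.
interface AutoUpdateBindingConfigProvider {
    Boolean autoUpdate(String itemName);
}

// Hypothetical provider that vetoes auto-update for every item our binding owns.
class NoAutoUpdateProvider implements AutoUpdateBindingConfigProvider {

    @Override
    public Boolean autoUpdate(String itemName) {
        // FALSE suppresses the automatic state update for this item;
        // returning null would defer the decision to other providers (the default).
        return Boolean.FALSE;
    }
}
```

In a real bundle you would register this class as an OSGi service via an OSGI-INF component XML, as described above, so the autoupdate binding picks it up.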

Conclusion

Documentation on auto-update in openHAB 2 is sparse, and the transition of openHAB to Eclipse SmartHome is still largely in progress, so some implementation details have escaped most developers. Auto-update is normally good practice, but it can interfere with direct updates pushed from the binding, such as when a sent command is rejected by the underlying device. Replacing the config provider with your own can significantly increase the control you have over your enterprise openHAB 2 application.

Creating a Custom UI for openHAB 2

Intro

OpenHAB is a home automation framework running on top of well-known Java technologies like OSGi.

OpenHAB is designed so that the average user can simply open the prepackaged user interfaces and start communicating with the devices in their home through the available bindings. Often, however, openHAB is used to build an automation system in which configuration, settings, and other user interactions are handled automatically, such as an internally developed application connecting devices in vehicles. In such cases, the default UIs are not appropriate for user interaction.

This article walks through the process of creating a custom UI that can be extended with all the desired behavior, and that omits the undesirable behavior of the default UIs. Because openHAB is not well documented, this information can be very challenging to find anywhere else.

Creating the UI

Your UI file structure should be as follows. Everything lives under the org.eclipse.smarthome.ui.new package (note that new is a reserved word in Java, so for your own bundle you may prefer a different final segment, such as newui; we keep this naming here for consistency with the screenshots).

File Structure for openHAB UI

NewUIApp.java

NewUIApp.java takes the HttpService from OSGi and registers the UI's resources at a given alias. This is done with the following code.

package org.eclipse.smarthome.ui.new.internal;

import org.osgi.service.component.ComponentContext;
import org.osgi.service.http.HttpService;
import org.osgi.service.http.NamespaceException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class NewUIApp {

    public static final String WEBAPP_ALIAS = "/newui"; // the root dir of our new ui
    private final Logger logger = LoggerFactory.getLogger(NewUIApp.class);

    protected HttpService httpService;

    protected void activate(ComponentContext componentContext) {
        try {
            // register our resources in the web directory
            httpService.registerResources(WEBAPP_ALIAS, "web", null);
            logger.info("Started New UI at " + WEBAPP_ALIAS);
        } catch (NamespaceException e) {
            logger.error("Error during servlet startup", e);
        }
    }

    protected void deactivate(ComponentContext componentContext) {
        httpService.unregister(WEBAPP_ALIAS);
        logger.info("Stopped New UI");
    }

    protected void setHttpService(HttpService httpService) {
        this.httpService = httpService;
    }

    protected void unsetHttpService(HttpService httpService) {
        this.httpService = null;
    }

}

MANIFEST.MF

The manifest file is used by OSGi to determine the bundle's name and its package requirements. It should look as follows.

Manifest-Version: 1.0
Bundle-Name: New UI
Bundle-Vendor: ExaminingEverything
Bundle-Version: 0.9.0.qualifier
Bundle-ManifestVersion: 2
Bundle-License: http://www.eclipse.org/legal/epl-v10.html
Import-Package: org.osgi.service.component,
 org.osgi.service.http,
 org.slf4j
Bundle-SymbolicName: org.eclipse.smarthome.ui.new;singleton:=true
Bundle-RequiredExecutionEnvironment: JavaSE-1.8
Service-Component: OSGI-INF/*.xml
Bundle-ClassPath: .

You’ll see how the bundle name comes into play later.

newuiapp.xml

This file tells OSGi which services our bundle provides. In this case, it simply provides the UI app Java class that we created, and it includes a reference tag so that the HttpService is injected at runtime.

<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" activate="activate" deactivate="deactivate" name="org.eclipse.smarthome.ui.new.internal.NewUIApp">
   <implementation class="org.eclipse.smarthome.ui.new.internal.NewUIApp"/>
   <reference bind="setHttpService" cardinality="1..1" interface="org.osgi.service.http.HttpService" name="HttpService" policy="static" unbind="unsetHttpService"/>
</scr:component>

A cardinality of 1..1 means that the reference must be satisfied by exactly one HttpService instance.

index.html

index.html can contain whatever you like; it is there simply to show that the new UI is registered with openHAB. The next step, which this article does not cover, would be wiring the UI elements to API calls against the openHAB instance. You can look at the existing Paper UI implementation for an example of this.

Here’s a simple index.html page.

<!DOCTYPE HTML>
<html>
  <head>
    <title>New UI</title>
  </head>
  <body>
    <h1>NEW UI TEST!</h1>
  </body>
</html>

pom.xml

The pom.xml is used to build the project with Maven. It has to contain all the information needed to build the project as an Eclipse plugin and to produce the correct output for an OSGi bundle, so we can register it with openHAB later.

The POM should look like this. The reason it is so large is that the configuration from every parent POM, from the UI project POM all the way up to the SmartHome library, has to be inlined here for the build to work. Normally those parents would be resolved by their location relative to the UI POM, but we won't always want our POM located in a specific place.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>


  <artifactId>org.eclipse.smarthome.ui.new</artifactId>
  <groupId>org.eclipse.smarthome.ui</groupId>
  <version>0.9.0-SNAPSHOT</version>

  <name>New UI</name>
  <packaging>eclipse-plugin</packaging>

  <properties>
    <esh.java.version>1.8</esh.java.version>
    <maven.compiler.source>${esh.java.version}</maven.compiler.source>
    <maven.compiler.target>${esh.java.version}</maven.compiler.target>
    <maven.compiler.compilerVersion>${esh.java.version}</maven.compiler.compilerVersion>
    <tycho-version>1.0.0</tycho-version>
    <tycho-groupid>org.eclipse.tycho</tycho-groupid>
    <xtext-version>2.12.0</xtext-version>
    <karaf.version>4.0.3</karaf.version>
    <ds-annotations.version>1.2.8</ds-annotations.version>
    <jdt-annotations.version>2.1.0</jdt-annotations.version>
    <build.helper.maven.plugin.version>1.8</build.helper.maven.plugin.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <build>
    <plugins>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-source-plugin</artifactId>
        <version>${tycho-version}</version>
        <executions>
          <execution>
            <id>plugin-source</id>
            <goals>
              <goal>plugin-source</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-scr-plugin</artifactId>
        <executions>
          <execution>
            <id>generate-scr-scrdescriptor</id>
            <goals>
              <goal>scr</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>${tycho-groupid}</groupId>
        <artifactId>tycho-maven-plugin</artifactId>
        <version>${tycho-version}</version>
        <extensions>true</extensions>
      </plugin>
      <plugin>
        <groupId>${tycho-groupid}</groupId>
        <artifactId>target-platform-configuration</artifactId>
        <configuration>
          <environments>
            <environment>
              <os>linux</os>
              <ws>gtk</ws>
              <arch>x86</arch>
            </environment>
            <environment>
              <os>linux</os>
              <ws>gtk</ws>
              <arch>x86_64</arch>
            </environment>
            <environment>
              <os>win32</os>
              <ws>win32</ws>
              <arch>x86</arch>
            </environment>
            <environment>
              <os>win32</os>
              <ws>win32</ws>
              <arch>x86_64</arch>
            </environment>
            <environment>
              <os>macosx</os>
              <ws>cocoa</ws>
              <arch>x86_64</arch>
            </environment>
          </environments>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-clean-plugin</artifactId>
        <version>2.5</version>
        <configuration>
          <filesets>
            <fileset>
              <directory>${basedir}/xtend-gen</directory>
              <includes>
                <include>**</include>
              </includes>
              <excludes>
                <exclude>.gitignore</exclude>
              </excludes>
            </fileset>
            <fileset>
              <directory>${basedir}/src/main/generated-sources/xtend</directory>
              <includes>
                <include>**</include>
              </includes>
              <excludes>
                <exclude>.gitignore</exclude>
              </excludes>
            </fileset>
          </filesets>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
      </plugin>   
    </plugins>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>${tycho-groupid}</groupId>
          <artifactId>tycho-compiler-plugin</artifactId>
          <version>${tycho-version}</version>
          <configuration>
            <extraClasspathElements>
              <extraClasspathElement>
                <groupId>org.apache.felix</groupId>
                <artifactId>org.apache.felix.scr.ds-annotations</artifactId>
                <version>${ds-annotations.version}</version>
              </extraClasspathElement>
              <extraClasspathElement>
                <groupId>org.eclipse.jdt</groupId>
                <artifactId>org.eclipse.jdt.annotation</artifactId>
                <version>${jdt-annotations.version}</version>
              </extraClasspathElement>
            </extraClasspathElements>
            <compilerArgs>
              <arg>-err:+nullAnnot(org.eclipse.jdt.annotation.Nullable|org.eclipse.jdt.annotation.NonNull|org.eclipse.jdt.annotation.NonNullByDefault),+inheritNullAnnot</arg>
              <arg>-warn:+null,+inheritNullAnnot,+nullAnnotConflict,+nullUncheckedConversion,+nullAnnotRedundant,+nullDereference</arg>
            </compilerArgs>
          </configuration>          
        </plugin>
        <plugin>
          <groupId>${tycho-groupid}</groupId>
          <artifactId>target-platform-configuration</artifactId>
          <version>${tycho-version}</version>
          <configuration>
            <!--
            <resolver>p2</resolver>
            <ignoreTychoRepositories>true</ignoreTychoRepositories>
            -->
            <pomDependencies>consider</pomDependencies>
            <target>
              <artifact>
                <groupId>org.eclipse.smarthome</groupId>
                <artifactId>targetplatform</artifactId>
                <version>${project.version}</version>
                <classifier>smarthome</classifier>
              </artifact>
            </target>
          </configuration>
        </plugin>
        <plugin>
          <groupId>${tycho-groupid}</groupId>
          <artifactId>tycho-surefire-plugin</artifactId>
          <version>${tycho-version}</version>
          <configuration>
            <failIfNoTests>false</failIfNoTests>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.felix</groupId>
          <artifactId>maven-scr-plugin</artifactId>
          <version>1.24.0</version>
          <configuration>
            <supportedProjectTypes>
              <supportedProjectType>eclipse-plugin</supportedProjectType>
            </supportedProjectTypes>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>build-helper-maven-plugin</artifactId>
          <version>${build.helper.maven.plugin.version}</version>
          <executions>
            <execution>
              <id>add-source</id>
              <phase>generate-sources</phase>
              <goals>
                <goal>add-source</goal>
              </goals>
              <configuration>
                <sources>
                  <source>src/main/groovy</source>
                </sources>
              </configuration>
            </execution>
            <execution>
              <id>add-test-source</id>
              <phase>generate-test-sources</phase>
              <goals>
                <goal>add-test-source</goal>
              </goals>
              <configuration>
                <sources>
                  <source>src/test/groovy</source>
                </sources>
              </configuration>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.apache.felix</groupId>
          <artifactId>maven-bundle-plugin</artifactId>
          <version>3.0.1</version>
          <extensions>true</extensions>
          <configuration>
            <supportedProjectTypes>
              <supportedProjectType>jar</supportedProjectType>
              <supportedProjectType>bundle</supportedProjectType>
              <supportedProjectType>eclipse-plugin</supportedProjectType>
            </supportedProjectTypes>
          </configuration>
        </plugin>
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.6.1</version>
          <configuration>
            <compilerId>groovy-eclipse-compiler</compilerId>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>compile</goal>
              </goals>
            </execution>
          </executions>
          <dependencies>
            <dependency>
              <groupId>org.codehaus.groovy</groupId>
              <artifactId>groovy-eclipse-compiler</artifactId>
              <version>2.9.2-01</version>
            </dependency>
            <dependency>
              <groupId>org.codehaus.groovy</groupId>
              <artifactId>groovy-eclipse-batch</artifactId>
              <version>2.4.3-01</version>
            </dependency>
          </dependencies>
        </plugin>
        <plugin>
          <groupId>com.mycila</groupId>
          <artifactId>license-maven-plugin</artifactId>
          <version>3.0</version>
          <configuration>
            <basedir>${basedir}</basedir>
            <header>src/etc/header.txt</header>
            <quiet>false</quiet>
            <failIfMissing>true</failIfMissing>
            <strictCheck>true</strictCheck>
            <aggregate>true</aggregate>
            <useDefaultMapping>true</useDefaultMapping>
            <mapping>
              <xtend>JAVADOC_STYLE</xtend>
              <mwe2>JAVADOC_STYLE</mwe2>
            </mapping>
            <includes>
              <include>src/**/*.java</include>
              <include>src/**/*.groovy</include>
              <include>src/**/*.xtend</include>
              <include>src/**/*.mwe2</include>
              <include>bin/**/*.mwe2</include>
              <include>workflows/**/*.mwe2</include>
              <include>src/main/feature/feature.xml</include>
              <include>feature.xml</include>
              <include>OSGI-INF/*.xml</include>
            </includes>
            <excludes>
              <exclude>_*.java</exclude>
            </excludes>
            <useDefaultExcludes>true</useDefaultExcludes>
            <properties>
              <year>2017</year>
            </properties>
            <encoding>UTF-8</encoding>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.eclipse.xtend</groupId>
          <artifactId>xtend-maven-plugin</artifactId>
          <version>${xtext-version}</version>
          <executions>
            <execution>
              <goals>
                <goal>compile</goal>
                <goal>xtend-install-debug-info</goal>
                <goal>testCompile</goal>
                <goal>xtend-test-install-debug-info</goal>
              </goals>
              <configuration>
                <outputDirectory>${basedir}/xtend-gen</outputDirectory>
                <testOutputDirectory>${basedir}/xtend-gen</testOutputDirectory>
              </configuration>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>${tycho-groupid}</groupId>
          <artifactId>tycho-versions-plugin</artifactId>
          <version>${tycho-version}</version>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-clean-plugin</artifactId>
          <version>2.5</version>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>

  <dependencies>
    <dependency>
      <groupId>org.apache.felix</groupId>
      <artifactId>org.apache.felix.scr.ds-annotations</artifactId>
      <version>${ds-annotations.version}</version>
      <optional>true</optional>
    </dependency>
  </dependencies> 

  <profiles>
    <profile>
      <id>sign</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.eclipse.cbi.maven.plugins</groupId>
            <artifactId>eclipse-jarsigner-plugin</artifactId>
            <version>1.0.5</version>
            <executions>
              <execution>
                <id>sign</id>
                <phase>verify</phase>
                <goals>
                  <goal>sign</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
    <profile>
      <id>QA</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.7.4.201502262128</version>
            <configuration>
              <dataFile>${session.executionRootDirectory}/target/coverage.jacoco</dataFile>
              <destFile>${session.executionRootDirectory}/target/coverage.jacoco</destFile>
              <append>true</append>
              <excludes>
                <exclude>**/*Test.*</exclude>
              </excludes>
            </configuration>
            <executions>
              <execution>
                <id>default-prepare-agent</id>
                <goals>
                  <goal>prepare-agent</goal>
                </goals>
              </execution>
              <execution>
                <id>default-prepare-agent-integration</id>
                <goals>
                  <goal>prepare-agent-integration</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
    <!-- We need this profile in order to set '-Xdoclint:none' as a project property which will be used later by maven-javadoc-plugin as an 'additionalparam' to be passed to the javadoc.exe. -->
    <!-- This option will be used only if the JDK version is 1.8 or higher. Earlier versions of javadoc.exe do not accept this option. -->
    <profile>
      <id>doclint-java8-disable</id>
      <activation>
        <jdk>[1.8,)</jdk>
      </activation>
      <properties>
        <javadoc.opts>-Xdoclint:none</javadoc.opts>
      </properties>
    </profile>
    <profile>
      <id>javadoc</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-javadoc-plugin</artifactId>
            <version>2.10.3</version>
            <executions>
              <execution>
                <id>aggregate</id>
                <goals>
                  <goal>aggregate-jar</goal>
                </goals>
              </execution>
            </executions>
            <configuration>
              <!-- 'javadoc.opts' project property is set by the 'doclint-java8-disable' profile. It is important to keep 'javadoc' profile declaration after the declaration of 'doclint-java8-disable' profile. -->
              <additionalparam>${javadoc.opts}</additionalparam>
              <excludePackageNames>*.internal.*,nl.*</excludePackageNames>
            </configuration>
          </plugin>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>build-helper-maven-plugin</artifactId>
            <version>${build.helper.maven.plugin.version}</version>
            <executions>
              <execution>
                <id>attach-artifacts</id>
                <phase>install</phase>
                <goals>
                  <goal>attach-artifact</goal>
                </goals>
                <configuration>
                  <artifacts>
                    <artifact>
                      <file>${project.build.outputDirectory}/${project.artifactId}-${project.version}.jar</file>
                      <type>jar</type>
                      <classifier>javadoc</classifier>
                    </artifact>
                  </artifacts>
                </configuration>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles> 

</project>

build.properties

This file contains the build information for your jar output. Specifically, it tells the build which files to include in the bundle and where to output the compiled class files.


output.. = target/classes/
bin.includes = META-INF/,\
               .,\
               OSGI-INF/,\
               web/index.html
source.. = src/main/java/

Adding the bundle to OpenHAB

Package UI

First, you need to run the mvn package command in your base directory to create a package that you can register as a bundle with OSGi. On Windows, you may need to download Maven, extract it, and add it to your PATH; on Debian-based Linux systems, you can run apt-get install maven. Your output should look like this:

openHAB UI Build Output

Start openHAB

Installing openHAB is relatively easy on both Linux distros and Windows: just unzip the files provided by openHAB and run the start script from your shell. You may then have to connect to the openHAB console by running ssh openhab@localhost -p 8101. The default password is habopen.

Install the bundle

The bundle:install command expects a URL, so you will have to type your package's file path like so: file://localhost/[root_dir]/org.eclipse.smarthome.ui.new/target/org.eclipse.smarthome.ui.new-0.9.0-SNAPSHOT.jar. Essentially, you provide the full path to the packaged jar as a file URL to register your new UI bundle.

Install OpenHAB Bundle

Next, you should call bundle:start "New UI" to start the bundle.

Start openHAB UI Bundle

View the page

Because we did not also register a dashboard tile bundle for the front page, you must navigate to the new UI directly, at http://localhost:8080/newui/index.html. The HttpService that OSGi injected into our UI then serves the index page from the location we registered (the web directory).

View openHAB UI

Updating the bundle

From now on, whenever you rebuild the package, you only need to call bundle:update "New UI", and OSGi will fetch the jar from the same location. If the version changes, however, you will need to install the new bundle again.

Conclusion

OpenHAB is a great home automation system, but it is designed to be user-friendly more than developer-friendly. Many small tips and tricks like this are needed to turn openHAB into a commercial production automation system. Creating a UI for openHAB is very simple once you understand OSGi and the openHAB framework. This guide should give you a starting point for developing your own system on top of openHAB.

Notes

Dashboard tiles

If you want your UI to show up in the list of tiles when the application opens, you'll also have to extend the org.openhab.ui.dashboard.DashboardTile class and provide your implementation to OSGi as part of a bundle. You can see how this is done in the existing Paper UI implementation.

API calls

The actual API calls needed to create an openHAB UI are varied and too many to go over here. The default openHAB installation provides a UI which allows you to test the various APIs, so you can use this as a starting point. You can also look at Paper UI if you want to see an implementation of the API calls based on AngularJS v1.

Using JavaScript Web Workers for Asynchronous Tasks

Intro

Until recently, JavaScript's concurrency model was essentially non-existent. Asynchronous calls like AJAX requests and event handlers are actually scheduled on a single execution thread, with each callback waiting its turn. This is very useful in the browser, where you want to be absolutely sure you are not manipulating the DOM in a haphazard fashion. UI developers don't want to worry about locking resources; they would rather focus on the end result in the UI.
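You can observe this single-threaded scheduling directly: even a zero-delay timer callback only runs after the current synchronous code has finished. A small illustration, runnable in any browser console or Node.js:

```javascript
var order = [];

// schedule a callback "immediately"; it still has to wait its turn
setTimeout(function () {
    order.push('timeout');
}, 0);

// this synchronous code runs first, before the timer callback fires
order.push('sync');

// once the current call stack empties, the timer callback runs,
// and order ends up as ['sync', 'timeout']
```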

But what about when you do want to worry about concurrency? What about when you would really like to offload some long running process to your front-end, without having to interrupt the UI thread? What do you do when moving your code to the server and using some signaling method is inconvenient or completely impossible?

Lucky for us modern developers, JavaScript now provides a solution. Web workers are genuinely concurrent execution contexts that do not interrupt the UI thread. Browsers back them with OS threads, so they can take advantage of multiple cores and other hardware optimizations. This is what we'll be looking at today.

Creating a Script

WebWorkers take a single JavaScript file and execute it in their own thread and execution context (i.e., self is not the window). This means that the first step is to write the script that you would like to run concurrently.

For this example, we will use a computationally intensive but simple task: number factorization. We’ll use a very basic algorithm since factoring numbers is not the point of this article.

Here is that script. You can call this worker.js.

var percentage = 0;
var factors = [];

// respond to messages from the main thread
self.onmessage = function(e) {
    percentage = 0;
    factors = [];
    findAllFactors(e.data);
};

// loop through candidate factors (we'll find non-prime factors as well)
function findAllFactors(num){
    // the smallest factor (other than 1) is at most the square root of the number
    var max = Math.sqrt(num);
    for(var i = 2; i <= max; ++i){
        // our percent complete is the number of candidates we've
        // checked out of the total possible candidates
        percentage = Math.ceil((i / max) * 100);
        // this number divides num, so add it to the list and
        // signal the main thread with our progress
        if(num % i === 0){
            factors.push(i);
            // post back the list of factors, and our completion percentage
            self.postMessage({
                percentage: percentage,
                factors: factors
            });
        }
    }
    // the loop is finished; post a final message so the main thread knows we're done
    self.postMessage({
        percentage: 100,
        factors: factors
    });
}

This script simply takes a number (passed to the worker) and factors it by iterating through the possible factors. Each time it finds a factor, it messages the main thread to let it know.

self in a web worker script refers to the worker itself. So by setting onmessage, we are telling the web worker to respond to any messages by factoring the given number.

postMessage does just the opposite. This method sends a message back to the main thread, allowing it to respond to any changes. We do this because the worker itself cannot access the DOM, and if it could, the DOM updates would not occur until after execution had completed, like any other script.

Creating the Main Thread Script

We'll need a script to create the web worker and respond to its updates. This script will look like the script below. You can call this main.js.

(function(window, undefined){
    window.loadingIndicator = {
        // 720,720 is a highly composite number (lots of factors); cubing
        // it gives the worker a nice long-running job for this example
        number: 720720 * 720720 * 720720,
        ui: { // our important DOM nodes
            loadingBar: null,
            button: null,
            factors: null
        },
        // our web worker instance
        worker: null,
        // call this from the page
        init: function(){
            this.bindEvents();
        },
        // bind document events
        bindEvents: function(){
            var self = this;
            document.addEventListener('DOMContentLoaded', function(){
                self.setUiElements();
                // bind click event for our button
                self.ui.button.addEventListener('click', self.buttonClicked.bind(self));
                // create the worker
                self.worker = self.createLoadingThread();
            });
        },
        // when we click the button, this adds a span, helping
        // demonstrate the 'non-blocking'ness of the worker
        buttonClicked: function(){
            var span = document.createElement('span');
            var br = document.createElement('br');
            span.innerHTML = 'You clicked me!';
            this.ui.button.parentNode.append(br);
            this.ui.button.parentNode.append(span);
        },
        // set our ui map to nodes
        setUiElements: function(){
            this.ui.loadingBar = document.getElementsByClassName('loading-bar-value')[0];
            this.ui.button = document.getElementsByTagName('button')[0];
            this.ui.factors = document.getElementById('factors');
        },
        // create our worker
        createLoadingThread: function(){
            var worker = new Worker('worker.js');
            worker.onmessage = this.respondToUpdate.bind(this);
            // send worker number to factor
            worker.postMessage(this.number);
            return worker;
        },
        //respond to messages from the worker (new factor found)
        respondToUpdate: function(event){
            // set width to percentage of completion
            this.ui.loadingBar.style.width = String(event.data.percentage) + '%';
            // loading completed
            if(event.data.percentage >= 100){
                this.completed(event);
            }
        },
        // clean up when worker is done
        completed: function(event){
            this.worker.terminate();
            // remove other elements except indicator
            this.ui.button.parentNode.removeChild(this.ui.button);
            // removing shifts entire list up, so we can just keep
            // removing the first element
            var spans = document.getElementsByTagName('span');                
            while(spans.length){
                spans[0].parentNode.removeChild(spans[0]);
            }
            var brs = document.getElementsByTagName('br');
            while(brs.length){
                brs[0].parentNode.removeChild(brs[0]);
            }
            this.ui.factors.innerHTML = 'Factors of ' + 
                this.number + ': ' + 
                event.data.factors.join(', ');
        }
    };
})(window);

Let’s look at the createLoadingThread method, since this is the most important thing going on here.

On the first line, it creates a web worker, passing it the name of the script we created earlier. Next, it sets the onmessage handler of the web worker to our respondToUpdate method. Don’t confuse this with onmessage in our worker.js file. This onmessage responds only to messages posted from the worker we create to the main thread. This respondToUpdate method in turn updates our loading indicator. When the task has completed, we call the completed method, which calls terminate on the worker.

Lastly, we post a single message to the worker that we created, passing it the number that we want to factor. This will trigger the onmessage handler of the worker, causing it to factor the number and begin posting messages back to our main thread.

Creating a Page

Of course, none of this can execute without a page to run in. The HTML for this example is below.

<!DOCTYPE HTML>
<html>
    <head>
        <style>
            .loading-bar-container {
                width: 200px;
                height: 25px;
                border: 1px solid lightgrey;
            }
            .loading-bar-value {
                background-color: green;
                height: 100%;
                width: 0%;
            }
            button {
                margin-top: 5px;
            }
        </style>
    </head>
    <body>
        <div class="loading-bar-container">
            <div class="loading-bar-value"></div>
        </div>
        <div id="factors"></div>
        <button>Click me, I still work!</button>
        <script src="main.js"></script>
        <script>
            window.loadingIndicator.init();
        </script>
    </body>
</html>

Putting it All Together

Great! So now we’ve got our worker script, and our script for the main thread. We also have a page to run all of this.

However, that's not the end of it. To run this example without errors, you will need to serve it from a web server. If you open the page directly from the filesystem (a file:// URL), Chrome and other browsers will block the worker with a security error, because there is no origin to check it against.
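Any static file server will do. For example, assuming Python 3 is installed, you can serve the example directory with its built-in server (the port number here is arbitrary):

```shell
# from the directory containing the page, main.js, and worker.js
python3 -m http.server 8000
# then browse to http://localhost:8000
```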

Assuming the above is set up, here’s what the page will look like. To test the page, click on the button we created. You’ll notice that the UI is not being interrupted by our factorization. A message is appended for every button click, while the factorization algorithm runs in the background. The loading indicator continues to update as the factorization continues to run.

WebWorker Loading Bar
WebWorker Loading
WebWorker Finished
WebWorker Finished

Conclusion

WebWorkers provide a very simple but very useful API. They provide a clear benefit by allowing you to run computationally intensive tasks without blocking the UI. This leads to a much better user experience, and to much more interactive pages.

The factorization process above can be any long-running calculation that is slowing down your application. Rather than forcing the user to wait, simply put your long-running task into its own script, create a web worker, and respond to its messages until it is completed.

If long-running computations are degrading your users' experience, WebWorkers are a great solution, and well worth the relatively small effort required to implement them.

Dependency Injection in .NET MVC 5 and WebApi

Intro

Dependency injection is a very popular concept today, usually as part of an MVC web application. Many newer frameworks, such as AngularJS and .NET Core, use DI by default, but what do those of us working on previous versions of .NET MVC do? Well, there's a solution.

MVC 5 provides many areas where we can inject our own behavior. This is part of the inversion of control philosophy that underlies many libraries. To dependency inject our own services, all we need to do is create our own MVC controller factory, replace the WebApi dependency resolver, and come up with a method of registering our services.

This article will lead you through that process, along with full code examples.

Getting Started

Solution Structure

The structure should be as follows, with each item below being a project of that name. The YourApp.Web project should be a .NET MVC web project, and the remaining projects should be class libraries; every project should output a DLL with the same name as the project. Service should reference Interface, and Web should reference Service and Windsor.

  • YourApp.Interface
  • YourApp.Service
  • YourApp.Web
  • YourApp.Windsor

Packages

First, you will need to install a few packages. We will be using Castle.Windsor as our DI container library. Find this package on NuGet and add it to your project.

Castle Windsor NuGet
Castle Windsor NuGet

Structure

In order for DI to be effective, you will need a services “layer” that contains your business logic. This layer should be made of classes which handle your business logic. For the purpose of this example, we will use the interface and implementation below for a single injected service.

This file should go in YourApp.Interface.

namespace YourApp.Interface {
    public interface IDemoService {
        string SayHello();
    }
}

This file should go in YourApp.Service.

namespace YourApp.Service {
    public class DemoService : IDemoService {
        public string SayHello(){
            return "Hello";
        }
    }
}

Web Config

You will need to add this section declaration inside the <configSections> element of your Web.config.

<section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />

Then add the castle section itself, as below. Add one component tag for each service you will be using. You can set the lifestyle to transient for now.

<castle>
    <components>
      <component id="Demo" service="YourApp.Interface.IDemoService, YourApp.Interface" type="YourApp.Service.DemoService, YourApp.Service" lifestyle="transient"></component>
    </components>
</castle>

Defining Dependencies

In order for DI to work, we must first define what our dependencies are and what they resolve to. The installer below has been created for this purpose.

Windsor “installers” register the interfaces and their implementations. In this case, we are registering all controllers, all API controllers, and anything in our app configuration file (services above).

The configuration for the IActionInvoker is a little special. What we're doing here is specifying our own custom implementation, and passing the DI container itself as a constructor parameter. You'll see why we have to do this later.

There are two methods. One registers for the current assembly, which we won’t be using. The other registers using a specified assembly, which we will pass from our web project. The second method is the one you should be concerned with. The first is just a default which implements the necessary interface.

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web;
using System.Web.Http.Controllers;
using System.Web.Mvc;

namespace YourApp.Windsor {
    public class ServiceInstaller : IWindsorInstaller {
        public void Install(IWindsorContainer container, IConfigurationStore store) {
            container.Register(Classes.FromThisAssembly()
                   .BasedOn<IController>().LifestyleTransient());
            container.Register(Classes.FromThisAssembly()
                   .BasedOn<IHttpController>().LifestyleTransient());
            container.Register(Classes.FromThisAssembly()
                   .BasedOn<FilterAttribute>().LifestyleTransient());
            container.Register(Component.For<IActionInvoker>()
                   .ImplementedBy<WindsorActionInvoker>()
                   .DependsOn(
                      Dependency.OnValue("container", container)
                   ).LifestyleTransient());
            container.Install(Castle.Windsor.Installer.Configuration.FromAppConfig());
        }

        public void InstallFromAssembly(
           IWindsorContainer container, 
           IConfigurationStore store, 
           Assembly assembly
        ) {
            container.Register(Classes.FromAssembly(assembly)
                   .BasedOn<IController>().LifestyleTransient());
            container.Register(Classes.FromAssembly(assembly)
                   .BasedOn<IHttpController>().LifestyleTransient());
            container.Register(Classes.FromAssembly(assembly)
                   .BasedOn<FilterAttribute>().LifestyleTransient());
            container.Register(Component.For<IActionInvoker>()
                   .ImplementedBy<WindsorActionInvoker>()
                   .DependsOn(
                      Dependency.OnValue("container", container)
                   ).LifestyleTransient());
            container.Install(Castle.Windsor.Installer.Configuration.FromAppConfig());
        }
    }
}

Of course, this is just the base class. We’ll inherit from this to pass in our web assembly during installation.

The class below is the installer which will actually be called when the Windsor container is initialized.

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using YourApp.Interface;
using YourApp.Windsor;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web;
using System.Web.Http.Controllers;
using System.Web.Mvc;

namespace YourApp.Web.Windsor {
    public class WebServiceInstaller : ServiceInstaller, IWindsorInstaller {
        new public void Install(IWindsorContainer container, IConfigurationStore store) {
            InstallFromAssembly(container, store, Assembly.GetExecutingAssembly());
        }
    }
}

Now we just need to call our installer from the application start method in global.asax.cs.

protected void Application_Start() {
    // install windsor (find our class which implements IWindsorInstaller, and calls the Install method)
    Container = new WindsorContainer();
    Container.Install(FromAssembly.This());
}

Replacing the Controller Factory

ASP.NET MVC uses a controller factory to create instances of each controller when the application receives a request. This is what we will be replacing. Our new factory will resolve the controller type using our DI container, which will inject the services into the constructor.

Creating the New Factory

The new factory is pretty simple. When a controller is done being used, we release it from our container. When a controller is created, we resolve the dependencies through our container, and continue on as usual, calling the default factory methods.

using Castle.MicroKernel;
using Castle.Windsor;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace YourApp.Windsor {
    public class WindsorControllerFactory : DefaultControllerFactory {
        private readonly IWindsorContainer _Container;

        public WindsorControllerFactory(IWindsorContainer container) {
            _Container = container;
        }

        public override void ReleaseController(IController controller) {
            _Container.Release(controller);  // The important part: release the component
        }

        protected override IController GetControllerInstance(
            RequestContext requestContext, 
            Type controllerType
        ) {
            if (controllerType == null) {
                throw new HttpException(404, 
                  string.Format(
                      "The controller for path '{0}' could not be found.", 
                      requestContext.HttpContext.Request.Path
                ));
            }

            Controller controller = (Controller)_Container.Resolve(controllerType);

            // new code
            if (controller != null) {
                // Don't worry about this yet. This will help us inject
                // dependencies into our action filters later
                controller.ActionInvoker = _Container.Resolve<IActionInvoker>();
            }

            return controller;
        }
    }
}

The above code follows the same process as the default controller factory that .NET provides, except that it uses the DI container to resolve the controller. The Resolve method is what injects our dependencies into the constructor.

Modify the Application Start

Now that we have this new controller factory, how will we use it? .NET MVC provides a point of extensibility for this, so we will add it there. Here is the code for this.

protected void Application_Start() {
    // install windsor
    Container = new WindsorContainer();
    Container.Install(FromAssembly.This());
    // use new controller factory
    ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(Container));
}

Replacing the Action Invoker

The action invoker is the class which determines how actions are called on a controller. This is the point of extensibility where we will dependency inject properties into the filters.

There is one issue, however: by this point in the request lifecycle, the action filter objects have already been created, so we can't use the DI container's Resolve method to inject dependencies through the constructor.

To solve this problem, we will use the extension method below, which injects the services into an action filter for every matching public property.

using Castle.MicroKernel;
using Castle.MicroKernel.ComponentActivator;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web;

namespace YourApp.Windsor {
    public static class WindsorExtension {
        public static void InjectProperties(this IKernel kernel, object target) {
            var type = target.GetType();
            var props = type.GetProperties(BindingFlags.Public | BindingFlags.Instance);
            foreach (var property in props) {
                if (property.CanWrite && kernel.HasComponent(property.PropertyType)) {
                    var value = kernel.Resolve(property.PropertyType);
                    try {
                        property.SetValue(target, value, null);
                    } catch (Exception ex) {
                        var message = string.Format(
                            @"Error setting property {0} on type {1}.
                            See inner exception for more information.", 
                            property.Name, type.FullName
                        );
                        throw new ComponentActivatorException(message, ex, null);
                    }
                }
            }
        }
    }
}

Creating the New Action Invoker

Now we can use this new extension method to loop through the already created action filters and inject their properties, continuing with the default invoke call after we’ve finished injecting our services.

using Castle.Windsor;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace YourApp.Windsor {
    public class WindsorActionInvoker : ControllerActionInvoker {
        readonly IWindsorContainer container;

        public WindsorActionInvoker(IWindsorContainer container) {
            this.container = container;
        }

        protected override ActionExecutedContext InvokeActionMethodWithFilters(
                ControllerContext controllerContext,
                IList<IActionFilter> filters,
                ActionDescriptor actionDescriptor,
                IDictionary<string, object> parameters) {
            foreach (IActionFilter actionFilter in filters) {
                container.Kernel.InjectProperties(actionFilter);
            }
            return base.InvokeActionMethodWithFilters(
                controllerContext, 
                filters, 
                actionDescriptor, 
                parameters
            );
        }

        protected override AuthorizationContext InvokeAuthorizationFilters(
           ControllerContext controllerContext, 
           IList<IAuthorizationFilter> filters, 
           ActionDescriptor actionDescriptor
        ) {
            foreach (IAuthorizationFilter authFilter in filters) {
                container.Kernel.InjectProperties(authFilter);
            }
            return base.InvokeAuthorizationFilters(
                controllerContext, 
                filters, 
                actionDescriptor
            );
        }
    }
}

Using the New Action Invoker

The controller factory has already been set up to use the new action invoker. That’s what the mysterious line was near the end of our controller factory. To recap, that was this:
controller.ActionInvoker = _Container.Resolve<IActionInvoker>();.
The _Container.Resolve call correctly calls our custom action invoker because we registered it in our installer at the beginning of the tutorial.

Replacing the WebApi Dependency Resolver

The .NET WebApi does not use the same concept of a controller factory (or at least all attempts to use it for DI have been unsuccessful). Instead, we will replace the default dependency resolver with our own.

Creating the New Dependency Resolver

Given that the dependency resolver for .NET WebApi is very similar to what any other DI container does, it is very easy to replace this with our own, using our Windsor DI container instead.

using Castle.MicroKernel;
using Castle.Windsor;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http.Dependencies;

namespace YourApp.Windsor {
    public class WindsorDependencyResolver : System.Web.Http.Dependencies.IDependencyResolver {
        private readonly IWindsorContainer _container;

        public WindsorDependencyResolver(IWindsorContainer container) {
            if (container == null) {
                throw new ArgumentNullException("container");
            }

            _container = container;
        }
        public object GetService(Type t) {
            return _container.Kernel.HasComponent(t) ? _container.Resolve(t) : null;
        }

        public IEnumerable<object> GetServices(Type t) {
            return _container.ResolveAll(t).Cast<object>().ToArray();
        }

        public IDependencyScope BeginScope() {
            return new WindsorDependencyScope(_container);
        }

        public void Dispose() {

        }
    }
}

Updating the Application Start

To add our new resolver to the WebApi, we again find ourselves in the application start. Here we can set our new resolver.

protected void Application_Start() {
    // install windsor
    Container = new WindsorContainer();
    Container.Install(FromAssembly.This());
    // resolve references for API controllers
    // adding a collection sub-resolver resolves things like List<Type> when you've only mapped Type.
    // this may not be needed, but you should test your code with and without it to be sure
    Container.Kernel.Resolver.AddSubResolver(new CollectionResolver(Container.Kernel, true));
    // replace actual dependency resolver with our own
    var dependencyResolver = new WindsorDependencyResolver(Container);
    GlobalConfiguration.Configuration.DependencyResolver = dependencyResolver;
    // use new controller factory
    ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(Container));
}

Update the Application Code

Our solution is now ready for dependency injection, but our controllers and action filters do not yet have constructors or properties to inject services into. Below is the updated structure of our controller and action filter classes.

Updating Controllers

Adding DI to our controllers is pretty easy. Our new controller factory will look for a constructor to inject the services into, matching any types with the services we specified. In this case, that’s our demo service. Our new controller constructor will look like this.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Reflection;
using System.Web;
using System.Web.Mvc;
using System.Web.Security;

using YourApp.Interface;

namespace YourApp.Web.Controllers {
    public class HomeController : BaseController {
        private readonly IDemoService _DemoService;

        public HomeController() {

        }

        public HomeController(IDemoService demoService) {
            _DemoService = demoService;
        }

        //... actions here
    }
}

You can now reference _DemoService in any of your action methods. You can also easily replace the implementation, or add new services to be injected into the constructor.

Updating Action Filters

If you remember, our action filters have already been instantiated by the time we can inject the properties, so we had to create an extension method to do this. This problem also rules out using constructor injection, so we will take a different approach: property injection. Here is how that looks for a simple filter.

using YourApp.Interface;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Web.Mvc;

namespace YourApp.Web
{
    public class LayoutFilterAttribute : ActionFilterAttribute {
        // this property is filled in with our service by our special IActionInvoker
        public IDemoService _DemoService { get; set; }

        public LayoutFilterAttribute()
        {
            
        }

        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            // inject data into layout
            filterContext.Controller.ViewBag.message = _DemoService.SayHello();
        }
    }
}

Conclusion

So that’s all there is to it! It can be quite a lengthy process, but it’s well worth the results, especially if upgrading to .NET Core is not an option, but you’d like to have fine control over your DI process.

The code given here is highly re-usable. If done correctly, you should have a YourApp.Windsor project that you can use in all .NET MVC 5 or WebApi projects for DI, which you can build and use as a dependency for any number of projects without modification.

DI will definitely help you more loosely couple the components of your application, and it makes changes to your application code much easier. You can also dependency inject services into your services, data access objects into your data access layer, and so on. This makes the pattern highly beneficial, especially when an application may use several implementations of the same service, or may be moved to another data store in the future. DI is good programming practice, and moreover it makes code much more pleasant to work in, and much easier to understand.

Another good takeaway is .NET’s approach to frameworks. They use inversion of control heavily, so there is always a point of extensibility that you can override to provide your own custom functionality. If you ever find yourself wishing a .NET library worked differently, remember: Microsoft has probably left the library heavily open to extension, so there’s no need to force it to do your bidding.

That's all. I hope this becomes very useful in many people's projects; it certainly has been in mine.

Digit Classification with TensorFlow and the MNIST Dataset

Intro

Machine learning has been growing by leaps and bounds in recent years, and with libraries like TensorFlow, it seems like almost anything is possible. One interesting application of neural networks is in classification of handwritten characters – in this case digits.

This article will go through the fundamentals of creating and using a specific kind of network in TensorFlow: a convolutional neural network. Convolutional neural networks are specialized networks used for image recognition, and they perform much better on image tasks than a vanilla deep neural network.

Concepts

Before diving into this project, we will need to review some concepts.

TensorFlow

TensorFlow is more than just a machine learning library: it is a library for creating distributed computation graphs, whose execution can be deferred until needed, and which can be saved and restored.

TensorFlow works by building computation graphs. These graphs are stored and executed later, within a “session”.

By storing neural network connection weights as matrices, TensorFlow can be used to create computation graphs which are effectively neural networks. This is the primary use of TensorFlow today, and how we’ll be using it in this article.
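The build-then-run split can be sketched with a toy graph in plain Python. This is an illustrative analogy of my own, not TensorFlow's actual API:

```python
# Toy deferred computation graph: building nodes only records operations;
# nothing is computed until run() is called (loosely analogous to
# TensorFlow 1.x's graph/session split).

class Node:
    def __init__(self, fn, inputs=()):
        self.fn = fn          # the operation to perform
        self.inputs = inputs  # upstream nodes feeding this one

    def run(self):
        # recursively evaluate inputs, then apply this node's operation
        return self.fn(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

# Build the graph first...
graph = add(mul(constant(3), constant(4)), constant(5))
# ...then execute it, like session.run()
print(graph.run())  # 17
```

Nothing happens at graph-construction time; all the work is done in the single `run()` call at the end.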

Convolutional Neural Networks

Convolutional neural networks are networks loosely inspired by the human visual system. Information is received as a “block” of data, like an image, and filters are applied across the entire image; these filters transform the image and reveal features which can be used for classification. For instance, one filter might find round edges, which could indicate a five or a six. Other filters might find straight lines, indicating a one or a seven.

The weights of these filters are learned as the model receives data, so the model gets better and better at predicting images by getting better and better at coaxing out features with its filters.

There is much more than this to a convolutional neural network, but this will suffice for this article.
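To make the filter idea concrete, here is a small pure-Python sketch of one filter sliding over an image. The function, image, and filter here are illustrative inventions, not part of any library:

```python
# A single convolutional filter pass (no padding, stride 1) over a
# tiny hypothetical 4x4 "image" using a 2x2 filter.

def convolve2d(image, kernel):
    """Slide kernel over image, taking a dot product at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter: responds where intensity changes left-to-right
vertical_edge = [[1, -1],
                 [1, -1]]
# Dark on the left, bright on the right: a vertical edge down the middle
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
print(convolve2d(image, vertical_edge))
```

The output is nonzero only at the column where the intensity changes, which is exactly the kind of localized feature a trained convolutional layer learns to extract.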

The Data

How do we get the data we’ll need to train this network? No problem; TensorFlow provides us some easy methods to fetch the MNIST dataset, a common machine learning dataset used to classify handwritten digits.

Simply import the input_data module from the TensorFlow MNIST tutorial namespace as below. You will need to reshape each image into a 28 by 28 square, since the original dataset stores each image as a flat list of 784 numbers.

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data")

test_imgs = mnist.test.images.reshape(-1, 28, 28, 1)
test_lbls = mnist.test.labels

train_imgs = mnist.train.images.reshape(-1, 28, 28, 1)
train_lbls = mnist.train.labels

The Network

So how might we build such a network? Where do we start? Lucky for us, TensorFlow provides this functionality out of the box, so there's no need to reinvent the wheel.

The first thing that must be defined are our input and output variables. For this, we’ll use placeholders.

X = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
y = tf.placeholder(tf.int64, shape=(None), name="y")

Next, we need to define our initial filters. To avoid vanishing/exploding gradients, a truncated normal distribution is recommended for initialization. In our case, we will have two sets of filters for our two convolutional layers.

filters = tf.Variable(tf.truncated_normal((5,5,1,32), stddev=0.1))
filters_2 = tf.Variable(tf.truncated_normal((5,5,32,64), stddev=0.1))

Finally, we need to create our actual convolutional layers. This is done using TensorFlow’s tf.nn.conv2d method. We also use a name scope to keep things organized. Note the max pooling layers between convolutional layers. The max pool layers aggregate the image data from each filter using a predefined method, and are not trained. They simply help reduce the complexity of the data by squashing the many layers produced by our filters.

# fully_connected comes from tf.contrib.layers in this version of TensorFlow
from tensorflow.contrib.layers import fully_connected

n_outputs = 10  # one output per digit class

with tf.name_scope("dnn"):
    convolution = tf.nn.conv2d(X, filters, strides=[1,2,2,1], padding="SAME")
    max_pool = tf.nn.max_pool(convolution, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID")
    convolution_2 = tf.nn.conv2d(max_pool, filters_2, strides=[1,2,2,1], padding="SAME")
    max_pool_2 = tf.nn.max_pool(convolution_2, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID")
    flatten = tf.reshape(max_pool_2, [-1, 2 * 2 * 64])
    predict = fully_connected(flatten, 1024, scope="predict")
    keep_prob = tf.placeholder(tf.float32)
    dropout = tf.nn.dropout(predict, keep_prob)
    logits = fully_connected(dropout, n_outputs, scope="outputs", activation_fn=None)

Also note that before our prediction layer, we have to squash down the final max pool output to make predictions at our fully connected layer. You can get the shapes of the various layers as shown below, to figure out what size your various layers need to be.

print("conv", convolution.get_shape())
print("max", max_pool.get_shape())
print("conv2", convolution_2.get_shape())
print("max2", max_pool_2.get_shape())
print("flat", flatten.get_shape())
print("predict", predict.get_shape())
print("dropout", dropout.get_shape())
print("logits", logits.get_shape())
# the following three tensors are defined in the training/eval section below
print("logits guess", logits_guess.get_shape())
print("correct", correct.get_shape())
print("accuracy", accuracy.get_shape())

We also apply dropout to avoid overfitting, and do not apply an activation function to our outputs. We will instead compute the cross-entropy directly from the logits at each training step, which is both faster and more numerically stable.

Now to create our training and evaluation layers. We will also namespace these like the previous layers, to make things easier to understand when they are viewed in a visualization tool like TensorBoard.

Our loss is the average of the cross-entropy between the expected labels and our logits; this much should make sense.

For training, we use an Adam optimizer, which is almost always recommended. The learning rate used in this article is 1e-4. This is the same learning rate that is used for TensorFlow’s own “expert” tutorial on MNIST.

Our evaluation is a little more complicated. Since we are training with batches, we need the output for each item in the batch. We get it by applying tf.argmax to every output list using tf.map_fn. Then, we compare the guesses to the actual values using tf.equal. Our accuracy is the mean of these comparisons (i.e., the fraction of digits we classified correctly).

with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")
    
with tf.name_scope("train"):
    learning_rate = 1e-4
    optimizer = tf.train.AdamOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)

with tf.name_scope("eval"):
    logits_guess = tf.cast(tf.map_fn(tf.argmax, logits, dtype=tf.int64), tf.int64)
    correct = tf.equal(logits_guess, y)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()

To actually train the network, we will need to run through the data several times, running a batch at every iteration. In this case, we will aim for 20,000 iterations. To calculate how many epochs we will need for our batch size, we use the following code.

keep_prob_num = 0.5
batch_size = 50
goal_iterations = 20000
iterations = mnist.train.num_examples // batch_size
epochs = int(goal_iterations / iterations) # so that total iterations ends up being around goal_iterations

Now to actually run the training operation on our graph.

with tf.Session() as sess:
    sess.run(init)
    for i in range(epochs):
        for iteration in range(iterations):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            X_batch_shaped = X_batch.reshape(X_batch.shape[0], 28, 28, 1)
            sess.run(training_op, feed_dict = {
                X: X_batch_shaped, 
                y: y_batch, 
                keep_prob: keep_prob_num})
            print("epoch:",i)
            print("iteration:", iteration)

It’s also recommended that you save the model and evaluate the accuracy at every epoch. You can accomplish this with the following code.

Evaluating

accuracy_val = sess.run(accuracy, feed_dict = {
                   X: train_imgs, 
                   y: train_lbls,  
                   keep_prob: 1.0})
print("accuracy:", accuracy_val)

Saving

saver = tf.train.Saver()
saver.save(sess, save_path)

After running this model through all epochs and iterations, your accuracy should be around 99.2%. Let’s check that.

with tf.Session() as sess:
    saver.restore(sess, save_path) #assume you've saved model, but could run in same session immediately after training
    accuracy_val = sess.run(accuracy, feed_dict = {
                       X: test_imgs, 
                       y: test_lbls,  
                       keep_prob: 1.0}) # test accuracy
    t_accuracy_val = sess.run(accuracy, feed_dict = {
                         X: train_imgs, 
                         y: train_lbls,  
                         keep_prob: 1.0}) # training accuracy
    print("accuracy:", accuracy_val)
    print("train accuracy:", t_accuracy_val)

Of course, in the above, the test accuracy is what’s most important, as we want our model to generalize to new data.

Improvements

There are several steps you can take to improve on this model. One is to apply affine transformations to the images, creating additional images similar to, but slightly different from, the originals. This helps account for handwriting with various “tilts” and other tendencies.
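As a minimal sketch of this idea (the function and its parameters are my own, not part of the article’s code): random pixel shifts are the simplest affine transform and need nothing but NumPy; rotations for the “tilts” could be added with something like scipy.ndimage.rotate.

```python
import numpy as np

def augment_with_shifts(images, max_shift=2):
    # images assumed shaped (N, 28, 28, 1), matching the training data above
    out = np.empty_like(images)
    for i, img in enumerate(images):
        dy, dx = np.random.randint(-max_shift, max_shift + 1, size=2)
        # shift the digit a few pixels vertically and horizontally
        out[i] = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out
```

The shifted copies can then be concatenated onto the original training set before shuffling.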

You can also train several copies of the same network and have them make the final prediction together as an ensemble, averaging the predictions or choosing the prediction with the highest confidence.
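Averaging is simple once each model’s per-class probabilities are collected. This sketch assumes an array of shape (n_models, n_samples, n_classes), which is my own convention, not from the article:

```python
import numpy as np

def ensemble_predict(all_probs):
    # all_probs: (n_models, n_samples, n_classes) softmax outputs
    mean_probs = np.mean(all_probs, axis=0)  # average over the models
    return np.argmax(mean_probs, axis=1)     # one predicted class per sample
```

Picking the single most confident model’s answer instead would replace the mean with a max over the model axis.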

Conclusion

TensorFlow makes digit classification easier than ever. Machine learning is no longer the domain of specialists; it should be a tool in the belt of every programmer, ready to solve complex optimization, classification, and regression problems for which there is no obvious or cost-effective solution, and to power programs that must respond to new information. Machine learning is the way of the future for many problems, and, as another blogger has put it: it’s unreasonably effective.

Design Patterns in JavaScript — Revisited

Intro

My original post on this subject did not dive deep into true “design patterns”, but rather into basic inheritance in JavaScript. Since inheritance can be done in multiple ways in JavaScript, how you choose to inherit is itself a design pattern.

This particular article will look into implementing common OOP design patterns in JavaScript, without violating the principles of those designs. Many online examples of design patterns in JavaScript violate these principles. For instance, many versions of the Singleton pattern are not inheritable, which defeats the purpose of the Singleton. And oftentimes, you can also create instances of them. This article assumes you already know these patterns and why they are used, and simply want to see their implementation in JavaScript.

JavaScript is an interesting language. It is itself based on a design pattern: the prototype. A prototype is a pre-built instance of an object that defines initial values for much of the object definition, conserving construction time at the cost of keeping the prototype instance in memory. This is exactly analogous to setting prototype properties on a new object in JavaScript.
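Prototype sharing is easy to demonstrate. In this sketch (the Dog example is mine, not from the original post), every instance looks up the very same function object through its prototype:

```javascript
function Dog(name){
    this.name = name;
}

// defined once on the prototype, shared by every instance
Dog.prototype.speak = function(){
    return this.name + ' says woof';
};

var rex = new Dog('Rex');
var fido = new Dog('Fido');

console.log(rex.speak());              // Rex says woof
console.log(rex.speak === fido.speak); // true: one shared function object
```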

Now this naturally leads to some limitations, which can become readily apparent in the implementations below. If you’d like to contribute to a library that tries to escape some of these limitations, you can contribute to ClassJS on GitHub. (Disclaimer: It’s my project).

I suggest you run all of these examples online as you read through.

Anyways, let’s get down to business. Here are the 10 common design patterns we will go over:

  1. Singleton
  2. Abstract Factory
  3. Object Pool
  4. Adapter
  5. Bridge
  6. Decorator
  7. Chain of responsibility
  8. Memento
  9. Observer
  10. Visitor

Singleton

Now, it’s easy enough to get something in JavaScript that looks like a Singleton. But I’ll show you how to write something that actually behaves like one. I.e.:

  1. You cannot instantiate it
  2. It holds one single read-only instance of itself
  3. You can inherit from it

Nearly all implementations you’ll find miss out on one of these points, especially 1 or 3. To accomplish these in JavaScript, you’ll need to hide the instance in a closure, and throw an exception when the constructor is called from external code. Here’s how that works:

var Singleton = (function(){
    var current = {};

    function Singleton(){
        if(this.caller !== this.getCurrent && this.caller !== this.copyPrototype){
            throw 'Cannot instantiate singleton';
        }
    }

    Singleton.prototype.sayHello = function(){
        console.log('hi');
    };

    Singleton.getCurrent = function(){
        // current is a dictionary mapping type to instance
        // new this() creates a new instance of the calling prototype
        current[this] = (current[this] || new this());
        return current[this];
    };

    // we have to relax the rules here a bit to allow inheritance
    // without modifying the original prototype
    Singleton.prototype.copyPrototype = function(){
        return new this.constructor();
    };

    return Singleton;
})();


function SpecialSingleton(){
  // ensure calling from accepted methods
	Singleton.call(SpecialSingleton);
}

// copy prototype for inheritance
SpecialSingleton.prototype = Singleton.getCurrent().copyPrototype();

SpecialSingleton.getCurrent = Singleton.getCurrent;

SpecialSingleton.prototype.sayHelloAgain = function(){
	console.log('Hi again');
};

var singleton = SpecialSingleton.getCurrent();
// base class method
singleton.sayHello();
// derived method
singleton.sayHelloAgain();

// throws error
var special = new SpecialSingleton();

Notice that we also define a copyPrototype method. This is necessary so that the shared prototype does not get altered when we create other sub-classes. We could also serialize and deserialize the prototype with a special JSON reviver that handles functions, but that would make the explanation harder to follow.

Abstract Factory

A closely related pattern, of course, is the Abstract Factory, which itself is generally a Singleton. One thing to note here is that we do not enforce the type that is returned; this is only checked at runtime, since an error will be thrown if you call a method that does not exist.

//  Singleton from above

// create base prototype to make instances of
function Thing(){
}

Thing.prototype.doSomething = function(){
    console.log('doing something!');
};

// create derived prototype to make instances of
function OtherThing(){
}

// inherit thing prototype
OtherThing.prototype = new Thing();
// override doSomething method
OtherThing.prototype.doSomething = function(){
    console.log('doing another thing!');
};

function ThingFactory(){
    Singleton.call(ThingFactory);
}

ThingFactory.prototype = Singleton.getCurrent().copyPrototype();

ThingFactory.getCurrent = Singleton.getCurrent;

ThingFactory.prototype.makeThing = function(){
    return new Thing();
};

function OtherThingFactory(){
    Singleton.call(OtherThingFactory);
}

// need to copy from an instance, or the original's prototype would be modified
OtherThingFactory.prototype = ThingFactory.getCurrent().copyPrototype();

OtherThingFactory.getCurrent = ThingFactory.getCurrent;

OtherThingFactory.prototype.makeThing = function(){
    return new OtherThing();
};

var things = [];
for(var i = 0; i < 10; ++i){
    var thing = ThingFactory.getCurrent().makeThing();
    things.push(thing);
}

for(var i = 0; i < 10; ++i){
    var thing = OtherThingFactory.getCurrent().makeThing();
    things.push(thing);
}

// logs 'doing something!' ten times, then 'doing another thing!' ten times
things.forEach(function(thing){ thing.doSomething(); });

Object Pool

Our resource pool in this case is also a singleton. When a resource is requested and there are not enough to meet the demand, an exception is thrown back to the client, who is expected to then release a resource before calling again.

// ... singleton from first example

function Resource(){
}

Resource.prototype.doUsefulThing = function(){
	console.log('I\'m useful!');
};

var ResourcePool = (function(){
	var resources = [];
  var maxResources = Infinity;
  
  function ResourcePool(){
    // ensure calling from accepted methods
    Singleton.call(ResourcePool);
  }

  // copy prototype for inheritance
  ResourcePool.prototype = Singleton.getCurrent().copyPrototype();

  ResourcePool.getCurrent = Singleton.getCurrent;

  ResourcePool.prototype.getResource = function(){
    if(resources.length >= maxResources){
      throw 'Not enough resources to meet demand, please wait for a resource to be released';
    }
    var resource = new Resource();
    resources.push(resource);
    return resource;
  };

  ResourcePool.prototype.releaseResource = function(resource){
    resources = resources.filter(function(r){
    	return r !== resource;
    });
  };
  
  ResourcePool.prototype.setMaxResources = function(max){
  	maxResources = max;
  };

  return ResourcePool;
})();

function NeedsResources(){
}

NeedsResources.prototype.doThingThatRequiresResources = function(){
	var lastResource;
  for(var i = 0; i < 11; ++i){
  	try{
         lastResource = ResourcePool.getCurrent().getResource();
         lastResource.doUsefulThing();
    }catch(e){
    	// requested too many resources, let's release one and try again
      ResourcePool.getCurrent().releaseResource(lastResource);
      ResourcePool.getCurrent().getResource().doUsefulThing();
    }
  }
};

ResourcePool.getCurrent().setMaxResources(10);

var needsResources = new NeedsResources();
needsResources.doThingThatRequiresResources();

Adapter

Our adapter is rather simple. We only interface with the modern door, but when we tell that door to open, it interfaces with the ancient door, without us having to understand the underlying implementation.

function AncientDoorway(){

}

AncientDoorway.prototype.boltSet = true;
AncientDoorway.prototype.counterWeightSet = true;
AncientDoorway.prototype.pulleyInactive = true;

AncientDoorway.prototype.removeBolt = function(){
	this.boltSet = false;
};

AncientDoorway.prototype.releaseCounterWeight = function(){
	this.counterWeightSet = false;
};

AncientDoorway.prototype.engagePulley = function(){
	this.pulleyInactive = false;
};

function DoorwayAdapter(){
	this.ancientDoorway = new AncientDoorway();
}

DoorwayAdapter.prototype.open = function(){
	this.ancientDoorway.removeBolt();
  this.ancientDoorway.releaseCounterWeight();
  this.ancientDoorway.engagePulley();
};

DoorwayAdapter.prototype.isOpen = function(){
	return !(
  	this.ancientDoorway.boltSet || 
    this.ancientDoorway.counterWeightSet || 
    this.ancientDoorway.pulleyInactive
  );
};

var someDoor = new DoorwayAdapter();
// false
console.log(someDoor.isOpen());
// uses ancient interface to open door
someDoor.open();
// true
console.log(someDoor.isOpen());

Bridge

Our bridge object delegates its responsibilities to some other class. The only thing it knows about this class is which methods it supports. At runtime, we can swap out various implementations, which have different behavior in the client class.

function BaseThing(){

}

BaseThing.prototype.methodA = function(){};
BaseThing.prototype.methodB = function(){};

// if you wanted this to be truly private, you could check
// calling method, or wrap whole prototype definition in closure
BaseThing.prototype._helper = null;

BaseThing.prototype.setHelper = function(helper){
	if(!(helper instanceof BaseThingHelper)){
  	throw 'Invalid helper type';
  }
	this._helper = helper;
};

function Thing(){

}

Thing.prototype = new BaseThing();

// delegate responsibility to owned object
Thing.prototype.methodA = function(){
	this._helper.firstMethod();
};

Thing.prototype.methodB = function(){
	this._helper.secondMethod();
};

function BaseThingHelper(){

}

BaseThingHelper.prototype.firstMethod = function(){};
BaseThingHelper.prototype.secondMethod = function(){};


function ThingHelper(){

}

ThingHelper.prototype = new BaseThingHelper();

ThingHelper.prototype.firstMethod = function(){
	console.log('calling first');
};
ThingHelper.prototype.secondMethod = function(){
	console.log('calling second');
};

function OtherThingHelper(){

}

OtherThingHelper.prototype = new BaseThingHelper();

OtherThingHelper.prototype.firstMethod = function(){
	console.log('calling other first');
};
OtherThingHelper.prototype.secondMethod = function(){
	console.log('calling other second');
};

var thing = new Thing();
// set helper for bridge to use
thing.setHelper(new ThingHelper());

thing.methodA();
thing.methodB();
// swap implementation
thing.setHelper(new OtherThingHelper());

thing.methodA();
thing.methodB();

Decorator

Our decorator prototypes delegate responsibility to their base classes, while adding additional functionality. They are instantiated by passing an object of the same type for them to wrap. When the calls propagate all the way to the base class, the original wrapped object’s method is called.

// LCD prototype
function BaseThing(){}

BaseThing.prototype.doSomething = function(){};

// implementation (client code)
function Thing(){}

Thing.prototype = new BaseThing();
Thing.prototype.doSomething = function(){};

// wrapper classes for decoration
function ThingWrapper(wrappedObject){
	if(!wrappedObject){
  	return;
  }
	if(!(wrappedObject instanceof BaseThing)){
  	throw 'Invalid wrapped prototype type';
  }
	this._wrappedObject = wrappedObject;
}

ThingWrapper.prototype._wrappedObject = null;
ThingWrapper.prototype = new Thing();
ThingWrapper.prototype.doSomething = function(){
	// delegate to wrapped class
  this._wrappedObject.doSomething();
};

function CoolThing(wrappedObject){
	ThingWrapper.call(this, wrappedObject);
}

CoolThing.prototype = new ThingWrapper();
CoolThing.prototype.doSomething = function(){
	ThingWrapper.prototype.doSomething.call(this);
	console.log('doing something cool!');
};

function AwesomeThing(wrappedObject){
	ThingWrapper.call(this, wrappedObject);
}

AwesomeThing.prototype = new ThingWrapper();
AwesomeThing.prototype.doSomething = function(){
	ThingWrapper.prototype.doSomething.call(this);
  console.log('doing something awesome!');
};

var wrappedThing = new AwesomeThing(new CoolThing(new Thing()));
wrappedThing.doSomething();

// constructing a wrapper with no wrapped object is also allowed (see the
// guard in ThingWrapper); this is what lets us assign new ThingWrapper()
// as a prototype above
var x = new ThingWrapper();

Chain of Responsibility

With chain of responsibility, various handlers are created for different events. Multiple handlers can handle multiple events, and multiple handlers may exist for the same event. All handlers keep a reference to the next handler, and handlers delegate their responsibility to the base class if they cannot handle an event. In this case, the base class will then ask the next handler to handle the event, and so on. The last handler handles all events, so we don’t have to worry about an event going nowhere and the cycle continuing forever.

var EventTypes = {
	Magic: 0,
	Cool: 1,
  Awesome: 2
};

function Handler(){}

Handler.prototype._nextHandler = null;

Handler.prototype.addHandler = function(handler){
	if(!(handler instanceof Handler)){
  	throw 'Invalid handler type';
  }
  // if it already has a handler, append the handler to the next one
  // this process will propagate to the end of the chain
  if(!this._nextHandler){
		this._nextHandler = handler;
  }else{
  	this._nextHandler.addHandler(handler);
  }
};

// tell the next handler to try to handle the event
Handler.prototype.execute = function(eventType){
	this._nextHandler.execute(eventType);
};

function CoolHandler(){}
CoolHandler.prototype = new Handler();
CoolHandler.prototype.execute = function(eventType){
	if(eventType !== EventTypes.Cool){
  	console.log('delegated uncool event');
        // tell the base handler to pass it to another handler
  	return Handler.prototype.execute.call(this, eventType);
  }
  console.log('handled cool event');
};

function AwesomeHandler(){}
AwesomeHandler.prototype = new Handler();
AwesomeHandler.prototype.execute = function(eventType){
	if(eventType !== EventTypes.Awesome){
  	console.log('delegated non-awesome event');
  	return Handler.prototype.execute.call(this, eventType);
  }
  console.log('handled awesome event');
};

function AnythingHandler(){}
AnythingHandler.prototype = new Handler();
AnythingHandler.prototype.execute = function(eventType){
  console.log('handled any event');
};

var root = new Handler();
root.addHandler(new CoolHandler());
root.addHandler(new AwesomeHandler());
root.addHandler(new AnythingHandler());

root.execute(EventTypes.Cool);
root.execute(EventTypes.Awesome);
root.execute(EventTypes.Magic);

Memento

Mementos can be very useful in JavaScript, such as when storing the application state in localStorage to be loaded when the session starts again.

In this case, we are simply saving a count variable and restoring it later. Restoring resets the count to its saved value, after which we call increment a few more times.

function Saveable(){
	this._count = 0;
}

Saveable.prototype.save = function(){
	return new SavedState(this._count);
};

Saveable.prototype.restore = function(savedState){
	this._count = savedState.getState();
  console.log('count reset to ' + String(this._count));
};

Saveable.prototype.increment = function(){
	++this._count;
};

Saveable.prototype.logValue = function(){
	console.log(this._count);
};

function SavedState(count){
	this._count = count;
}

SavedState.prototype.getState = function(){
	return this._count;
};

// state manager holds reference to thing that can be saved, and acts on it
function StateManager(){
  this._saveable = new Saveable();
}

StateManager.prototype.getSavedState = function(){
	return this._saveable.save();
};

StateManager.prototype.setSavedState = function(savedState){
	this._saveable.restore(savedState);
};

StateManager.prototype.increment = function(){
	this._saveable.increment();
  this._saveable.logValue();
};

// logs 1,2,3
var stateManager = new StateManager();
for(var i = 0; i < 3; ++i){
	stateManager.increment();
}
// state is now 3
var memento = stateManager.getSavedState();
// logs 4,5,6
for(var i = 0; i < 3; ++i){
	stateManager.increment();
}
// state restored to 3
stateManager.setSavedState(memento);
// logs 4,5,6 again
for(var i = 0; i < 3; ++i){
	stateManager.increment();
}

Observer

In JavaScript, the observer pattern competes with pub/sub. Pub/sub is oftentimes easier to implement, given the event-driven nature of JavaScript.

Use observer over pub/sub when you want the handlers and subjects to be more closely integrated, when your events flow in one direction, or when you want shared functionality in all observing or observed objects.
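For comparison, here is a minimal pub/sub hub (a sketch of my own, not part of the patterns covered below). Publishers and subscribers share only a topic name, never a reference to each other; that looser coupling is what distinguishes pub/sub from observer:

```javascript
// a minimal pub/sub hub: handlers are keyed by topic name only
var hub = {
    _topics: {},
    subscribe: function(topic, handler){
        (this._topics[topic] = this._topics[topic] || []).push(handler);
    },
    publish: function(topic, data){
        (this._topics[topic] || []).forEach(function(handler){
            handler(data);
        });
    }
};

hub.subscribe('name-changed', function(name){
    console.log('new name: ' + name);
});

hub.publish('name-changed', 'deadpool'); // logs 'new name: deadpool'
```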

function Person(){
	this._observers = [];
}

Person.prototype.name = '';

Person.prototype.setName = function(name){
	this.name = name;
  this._observers.forEach(function(observer){
  	observer.update();
  });
};

Person.prototype.observe = function(observer){
	this._observers.push(observer);
};

function Observer(subject){
	this._subject = subject;
}

Observer.prototype.update = function(){};

function NameObserver(subject){
	Observer.call(this, subject);
}

NameObserver.prototype = new Observer();

NameObserver.prototype.update = function(){
	console.log('new name: ' + this._subject.name);
};

function NameLengthObserver(subject){
	Observer.call(this, subject);
}

NameLengthObserver.prototype = new Observer();

NameLengthObserver.prototype.update = function(){
	console.log('new length of name: ' + this._subject.name.length);
};

var person = new Person();
person.observe(new NameObserver(person));
person.observe(new NameLengthObserver(person));
// all observers are called for each change
// logs new name, then length of 8
person.setName('deadpool');
// logs new name, then length of 9
person.setName('wolverine');

Visitor

The visitor pattern relies on polymorphism to cause correct handlers to be called. Since JavaScript does not have type-based method signatures, we instead create methods like so: ‘visit’ + elementTypeName, and call these on the visitor classes.

This also means that we need to check that the methods exist, and log or throw an exception when there is no valid handler; and that we need to store the type names of each prototype, since JavaScript provides no easy way to see the most-derived type.

This pattern allows us to handle each element in a list in a different way depending on its type, without having to add various method implementations to each one; and to handle each element in multiple ways depending on what visitor is visiting the element.

function Visitor(){}

Visitor.prototype.visit = function(element){
  if(!(('visit' + element.typeName) in this)){
  	return console.log('No handler for element of type ' + element.typeName);
  }
  // redirect to type-specific visit method
  this[('visit' + element.typeName)](element);
};

function Element(){}

Element.prototype.typeName = 'Element';

Element.prototype.accept = function(visitor){
	visitor.visit(this);
};

function CoolElement(){}

CoolElement.prototype = new Element();
CoolElement.prototype.typeName = 'CoolElement';

function AwesomeElement(){}

AwesomeElement.prototype = new Element();
AwesomeElement.prototype.typeName = 'AwesomeElement';

function CoolAwesomeVisitor(){}

CoolAwesomeVisitor.prototype = new Visitor();

// define type-specific visit methods to be called
CoolAwesomeVisitor.prototype.visitCoolElement = function(element){
	console.log('cool awesome visitor visiting cool element');
};

CoolAwesomeVisitor.prototype.visitAwesomeElement = function(element){
	console.log('cool awesome visitor visiting awesome element');
};

function AwesomeVisitor(){}

AwesomeVisitor.prototype = new Visitor();

AwesomeVisitor.prototype.visitAwesomeElement = function(element){
	console.log('awesome visitor visiting awesome element');
};

var visitors = [
	new CoolAwesomeVisitor(),
  new AwesomeVisitor()
];

var elements = [
	new CoolElement(),
  new AwesomeElement()
];

elements.forEach(function(element){
	visitors.forEach(function(visitor){
    element.accept(visitor);
  });
});

Conclusion

So that’s all for patterns now! I think this is the longest post I’ve ever written, and I intend to keep expanding on this as a good resource.

If you want to know more about these patterns in general, and what they’re used for, I highly recommend sourcemaking.

Notes

Oftentimes, you’ll see me wrap a class definition in a module like so:

var Class = (function(){
   var private = {};
   function Class(){}
   Class.prototype.setPrivate = function(value){
       private[this] = value;
   };
   Class.prototype.getPrivate = function(){
       return private[this];
   };
   return Class;
})();

The reason for this is fairly intuitive. In JS, you have to choose between two things: inheritance, and data hiding. You can’t have something akin to a private variable inherited by sub-classes. I’ll show you two common patterns that illustrate this.

function Class(){
    var private;
    this.setPrivate = function(value){
        private = value;
    };
    this.getPrivate = function(value){
        return private;
    };
}

Well… the variable is private. However, those getters and setters won’t be inherited, because they’re not on the prototype. You can technically solve this by calling the parent constructor with the sub-class as the caller, but I prefer the pattern I use.

The other possibility is this:

function Class(){}

Class.prototype._private = null;
Class.prototype.setPrivate = function(value){
    this._private = value;
};
Class.prototype.getPrivate = function(){
    return this._private;
};

This is a little better in some ways, and worse in others. Our data is no longer hidden; we’re relying on a naming convention to deter developers from accessing it. The properties will be inherited from the prototype, however.

Because of the reasons above, I tend to use the first pattern as a best practice, but depending on the situation any one of these may work fine.

CSS Stacking Contexts

Intro

Today we’ll be learning about a lesser-known feature of CSS: Stacking contexts.

You may have been working on a project and been surprised when you set the z-index for an element and it refused to move forward, remaining behind some other stubborn element.

There’s a reason for this behavior, and that reason is stacking contexts.

Stacking Context

A stacking context is essentially a group of elements whose z-index value determines their position within that group. If two elements do not share a stacking context, then they will ignore each other’s z-index values.

In this case, the stacking order is based on their relative order in the DOM (See image under “Creating a Stack”).

Creating a Stack

All of the common stacking context types. Order is relative, fixed, absolute, opacity, transform.

A stacking context is created in the following cases:

  • The root stacking context (html element)
  • Absolute or relative position with a set z-index
  • Fixed position
  • Opacity less than 1
  • A set transform
  • A few other less common instances

I’ll be covering only the common instances that developers will normally encounter.

The Root Stacking Context

This case is pretty clear. Initially, all elements are part of a single stacking context under the DOM, meaning that their relative position on the z axis is determined entirely by their z-index property. If no z-index is set, their order is determined by the order in which they appear in the DOM (See image under “Creating a Stack”).

Absolute or Relative Position With a Set Z-Index

This case is the second-most common. This is almost always intentional, but occasionally, developers may try to position an element in another stacking context over some absolutely positioned element and find that it’s not possible.
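A sketch of the trap (the class names are illustrative): no z-index, however large, lets a child escape the stacking context created by its positioned parent:

```css
.parent-a        { position: relative; z-index: 1; }
.parent-a .child { position: relative; z-index: 9999; } /* still trapped inside .parent-a's context */
.parent-b        { position: relative; z-index: 2; }    /* paints above .parent-a and its child */
```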

Fixed Position

Another common case, but one that can be confusing. Most, but not all, browsers have this behavior now. Fixed-position elements create their own stacking context, which, without a z-index, normally places them behind the document root. This can lead to mysteriously disappearing elements.
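A common fix (the selector here is illustrative) is to give the fixed element an explicit z-index so its stacking context sits above later content:

```css
.site-header {
  position: fixed;  /* creates its own stacking context */
  top: 0;
  z-index: 100;     /* without this, the header can vanish behind other content */
}
```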

Opacity less than 1

This is a rare case, but one that everyone should be aware of. If you’re going to set opacity, you have to know that the consequence will be a new stacking context. If all you want is a translucent element, it will be more predictable to simply set an rgba background with an alpha less than 1.
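For example (the selectors are illustrative), the rgba version stays in its parent’s stacking context, while the opacity version does not:

```css
.overlay-opacity { opacity: 0.5; }                    /* creates a new stacking context */
.overlay-rgba    { background: rgba(0, 0, 0, 0.5); }  /* translucent, no new context */
```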

The reason for this is clear: If it did not create a new stacking context, what elements would show through the transparent element?

A Set Transform

This is a case which is more and more common lately, as CSS transforms become the norm. This often throws people off, as we assume when we scale an element it should retain its position in the flow of the document. The new stacking context can cause a transformed element to hide menus and other elements which would normally appear in front.

How Stacking Contexts Interact

Of course, the most important thing is how to apply this knowledge to create layouts and fix problems in the real world. For this reason, I’ve supplied some examples of how stacking contexts interact with each other. Most importantly, how do their children determine their z-positioning relative to other stacking contexts’ children?

Well, using the example from “Creating a Stack”, here’s what happens:

Z-Index Set

Z-Index on Relative Element’s Children

If we set the z-index of the child elements, the result is the same as our original elements.

Z-Index Positive, Position Relative

Z-Index on Relative Element’s Children – Children Are Relatively Positioned

If we set the z-index of the child elements to a positive value, but additionally set the children’s positions to relative (creating a new stacking context for each child), then they will position completely independently of their parent, moving out in front of the other elements.

Z-Index Negative, Position Relative

Negative Z-Index on Relative Element’s Children – Children Are Relatively Positioned

If we set the z-index of the child elements to a negative value, but additionally set the position to relative (creating a new stacking context for each child), then they will position completely independently of their parent, moving behind the other elements.

Z-Index Greater Than Other Stacking Context’s Children

Relative and Fixed Element with a Set Z-Index, Children With Set Z-Index Values

In this case, we have given both the relative element and the fixed element a z-index. The z-indices of their children do not interact, so even though the relative children have higher z-index values than the fixed children, they do not appear in front of them. The children are each in separate stacking contexts, though their parents share the same stacking context.
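A sketch of this interaction (selectors and values are illustrative): even a very large z-index on a child cannot lift it above another stacking context whose root has a higher z-index:

```css
.relative-parent { position: relative; z-index: 1; }
.relative-parent .child { position: relative; z-index: 9999; } /* still confined below .fixed-parent */

.fixed-parent { position: fixed; z-index: 2; }
.fixed-parent .child { position: relative; z-index: 1; } /* still renders above the relative children */
```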

Conclusion

Stacking contexts are groups of elements whose z-index values position them along the z axis relative to each other. If an element is the root of a stacking context, its children will ignore the z-index values of the children of other stacking contexts, even if those values are larger than their own.

Stacking contexts are very important when creating layouts in CSS. A lack of understanding of stacking contexts can lead to difficulty implementing relatively simple UIs, and in fixing bugs which arise commonly in today’s UIs. Stacking contexts are very commonly created when showing things like menus, popups, windows, etc. These types of UI controls are very common in web applications today, and therefore so is knowledge of stacking contexts and how they interact.

Representational State Transfer (REST)

What is REST?

REST is an architecture which describes a system that transfers generally non-static content between a client and server. This content is called a resource, and is always some uniquely identifiable “thing”. RESTful services implemented on top of HTTP are a popular solution for web applications today.

Representational

REST is representational in the sense that every request must uniquely identify a resource. What counts as uniquely identifiable is essentially defined by the system, and depends on the level of granularity at which the system works.

For instance, on one system, perhaps a hospital is a unique resource, but on another system, each of the hospital’s buildings is considered independently, so you could not, for instance, request the completion date of the hospital, but only of a specific hospital building.

State

A RESTful service always returns stateful data. That is, it returns the current or specified state of the specific resource, which is not necessarily (and not usually) static data.

For instance, suppose that our hospital added a new wing. The resource representing the hospital, if it is a correct stateful representation, would then reflect this new building. Any request made prior to the addition of the new wing would have returned its state at that time – without the new wing.

This should not be confused with the statelessness of the requests. A RESTful server maintains no data about the state of the client or its requests.

Transfer

Transfer, of course, refers to the movement of data between a client and server. Data can flow both ways in a RESTful service, which usually supports the CRUD operations in addition to request types like HTTP OPTIONS.

How did REST come about?

First, HTTP

The Hypertext Transfer Protocol, or HTTP, was the necessary precursor to what we consider a modern implementation of a RESTful service. HTTP provides a client-server architecture that focuses on text-based requests of documents. Because the text-based requests use URIs (uniform resource identifiers), they are uniquely suited for use in a REST implementation, which is based on the concept of resources.

HTTP was originally proposed by Tim Berners-Lee, as a document storage and retrieval system between remote clients and servers. The original HTTP had only one method, GET, meaning that it would not be as suitable for a REST implementation as it is today.

HTTP soon added more methods, which made it well suited to a REST implementation. These included POST, PUT, and DELETE, which are used in today’s RESTful services to represent create, update, and delete operations respectively.
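As a rough sketch of how these methods map onto CRUD in a typical RESTful service (the routing, paths, and in-memory store here are hypothetical, not any particular framework):

```python
# Minimal in-memory sketch of a RESTful resource store.
# Each HTTP method maps to one CRUD operation on /hospitals/<id>.

store = {}
next_id = 1

def handle(method, path, body=None):
    """Dispatch a (method, path) pair the way a RESTful service would."""
    global next_id
    parts = path.strip("/").split("/")
    if method == "POST" and parts == ["hospitals"]:   # create
        rid = str(next_id)
        next_id += 1
        store[rid] = body
        return 201, rid
    rid = parts[1]
    if method == "GET":                               # read
        return (200, store[rid]) if rid in store else (404, None)
    if method == "PUT":                               # update (replace the state)
        store[rid] = body
        return 200, rid
    if method == "DELETE":                            # delete
        store.pop(rid, None)
        return 204, None

status, rid = handle("POST", "/hospitals", {"name": "General", "wings": 2})  # status == 201
```

Each request carries everything needed to execute it (statelessness), and the URI alone identifies the resource being acted on.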

The Concept

The term was coined in 2000 in Roy Fielding’s PhD dissertation. The concepts behind REST were used as the backbone for the URI standard used in HTTP requests, and HTTP (v1.1) was therefore RESTful in its design. The difference between this and modern REST usage is that the resource can be many more things than simply a static HTML document.

From this point, RESTful concepts were heavily adopted in the Web 2.0 age of asynchronous requests which loaded content in real-time into the browser. RESTful concepts enabled relatively simple and very consistent APIs to be created which abstracted this process heavily and eased implementation of complex applications handling asynchronous data requests.

What makes something RESTful?

There are five constraints necessary for a system to be considered RESTful, and one optional constraint.

Client-Server Interactions

The architecture must start with a client-server model, where a single server hosts the unique resource, which a client may request.

Stateless Requests

This constraint means that the server cannot store session data for a client. Each request must include all of the state necessary to execute it.

Cacheability

Responses must be created in such a manner that they are identifiable as cacheable or not. This allows an intermediate component to perform caching without special knowledge of the system.

Layered Architecture

Each point of processing should not have awareness of other parts of the processing chain.

Uniform Interface

The system must define a consistent API, which decouples the requests from the implementation.

Transfer of Logic

An additional concept sometimes considered is the ability to transfer logic representations that can be executed on the client. This includes scripts, applets, etc. Many people are surprised to learn that this idea is part of the original PhD dissertation, and that implementations did not catch up to the possibilities of the concept for about a decade.

What is the purpose of REST?

The purpose of REST is to provide an architecture which creates sufficient abstraction in a large, complex, distributed system of unique resources, so that a client-server model of resource access, alteration, and creation can occur without significant complexity and overhead, even in a system of global scale such as the World Wide Web has become today.

How is REST Used?

REST today is the backbone of HTTP. All common HTTP requests are stateless and conform to all five criteria for a REST implementation.

However, the more common modern usage of REST is in the implementation of a RESTful service on top of HTTP. These services provide a layer of abstraction over data access and representation, so that client code can easily manipulate the resulting structures and compose requests to interact with this data.

Many times, the RESTful service is implemented as an API for simple external access, with an authentication scheme built on top of it. These heavily abstracted interfaces can allow several entirely separate applications to consume and alter data in a manner consistent with their implementations, while maintaining a single distinct data source.

A RESTful API also allows the total request load to be distributed across several servers, which simply have to be aware of the API implementation and a data source.

Conclusion

REST is an important architectural model that defines itself as a set of five or six restrictions on top of an “unbounded” architecture. REST is not any one implementation, or any one concept or use case. It’s a highly extensible architecture that drives the web as we know it today, but is independent of its implementation in HTTP.

Many descriptions of REST are overly academic or too specific to a single implementation. I hope that I’ve provided a good resource on the fundamental meaning and purpose of REST, independent of HTTP, as well as its use in HTTP and web services today.

Given that this is a complex topic, on which all information essentially traces itself back to that single dissertation, it’s possible that some information may be inaccurate, so please let me know if you find these types of mistakes and they will be corrected immediately.

The CSS Box Model

Intro


One of the most poorly understood components of a web application is the styling. Many of the developers I’ve worked with haven’t taken the time to learn the principles that CSS relies on — especially how the rules cascade — and how padding, margins, borders, and content create the layouts of a page.

The latter is called the “box model” and is what we’ll be looking at today.

The box model is composed of four parts:

  • Content
  • Padding
  • Borders and
  • Margins

Content


CSS Box Model Content

In a block element, the content area is determined by:

  • The height and width, if set, otherwise
  • The height and width of its content

In an inline element (almost anything directly containing text) the content area is determined by:

  • The line-height and width, if set, otherwise
  • The height of a line (font size), and the width of its container

It’s important to note that an inline element’s borders, padding, and margins will apply to each line that the content appears on.
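A short illustration of the two cases (the selectors and values are arbitrary):

```css
/* Block element: width and height, when set, define the content area */
div.block-example { width: 200px; height: 50px; }

/* Inline element: height comes from line-height / font size,
   and the content wraps at the width of its container */
span.inline-example { line-height: 1.5; }
```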

Padding


CSS Box Model Padding

Padding is the space between the border and the edge of the element’s content area. You can think of it as a margin between the border and content.

Background colors apply only to the content area and the padding space.

Borders


CSS Box Model Border

Borders begin immediately outside of the padding area, which is essentially all you need to know. This is true for both inline and block elements.

Background colors apply to all space inside the border.

Margins


CSS Box Model Margin

Margins begin just outside of the border, and determine the space between it and the elements around it.

Box-sizing


The CSS box-sizing property can cause exceptions to the above rules. The two widely supported values for box-sizing are:

  • content-box
  • border-box

Content-box

Our original CSS Box

Content box is the default value for this property in CSS, and means that elements will behave as shown above. In other words, the content determines height and width, or if set explicitly, the height and width control the size of the content area.

This means that an element which has a width of 50px with 5px of padding, a 1px border, and 3px of margins would take up 50px + (5px + 1px + 3px) * 2 (two sides) = 68px of width.

Border-box

The same box but using border-box

In this case, the width and height, when set, control the size of the element including content, padding and borders. Only the margins are not included.

This means that an element which has a width of 50px with 5px of margins would take up 50px + 5px * 2 (two sides) = 60px of width, regardless of padding or borders.
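The arithmetic in both examples can be checked with a small helper (this function is purely illustrative, not part of any CSS tooling):

```python
def rendered_width(width, padding=0, border=0, margin=0, box_sizing="content-box"):
    """Total horizontal space an element occupies, per the box-sizing rules above."""
    if box_sizing == "content-box":
        # width sets the content area; padding, border, and margin are added on both sides
        return width + 2 * (padding + border + margin)
    if box_sizing == "border-box":
        # width already includes content, padding, and border; only margin is added
        return width + 2 * margin

print(rendered_width(50, padding=5, border=1, margin=3))                           # 68
print(rendered_width(50, padding=5, border=1, margin=5, box_sizing="border-box"))  # 60
```

Note that in the border-box call, the padding and border arguments have no effect on the total, matching the "regardless of padding or borders" behavior described above.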

Padding-box

This value is not supported in most browsers (it has since been removed from the CSS specification), but where supported, the height and width would include both the content and padding.

Margin collapsing


Another important exception to these rules is margin collapsing. Margin collapsing means that instead of two elements’ margins being “added” together, they simply overlap, and only the larger margin is displayed.

Margin collapsing occurs in 3 basic cases:

  • Adjacent sibling elements
  • A parent whose first or last child’s margin touches its own margin
  • Empty elements

Adjacent sibling elements

Adjacent Siblings Margin Collapse
Note how the center margin is 15px instead of 30px

If two sibling elements appear one after another, their vertical margins will collapse.

Parent with first/last child margin collision

This occurs when the top margin of a parent element “touches” the top margin of its first child, or when the bottom margin of a parent element “touches” the bottom margin of its last child.

In either of these cases, the child element margin is “pushed” outside of the parent element, and the larger of the two is what will be displayed.

Empty elements

When a box’s top and bottom margins touch (because it has no content), the margins will collapse, meaning it will essentially have only one margin: the larger of the two.
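To illustrate the adjacent-sibling case (class names are illustrative): the gap between these two paragraphs is the larger of the two margins, 20px, rather than their 30px sum:

```css
p.first  { margin-bottom: 20px; }
p.second { margin-top: 10px; }  /* collapses into the 20px margin above; gap is 20px, not 30px */
```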

Summary


The CSS box model is very simple once you understand and apply it. Knowing these fundamental rules of CSS layouts, as well as the gotchas that can occur, should help you make quick work of many common layout problems.