How we got here

Posted 1 June 2016, by Larkswood Digital,

Our first post is aptly about how this site came to exist. Perhaps we should have called this post, "Hello World"?

As we do with new projects lately, we started with a fresh Zurb Foundation install. Foundation is a responsive front-end framework which comes with a cool set of tools for quickly prototyping new websites, including a simple static site generator combining SASS, Gulp, Panini and HandlebarsJS, with partials and layouts. We were quickly up and running and working on the general look and feel.

Foundation comes with the following structure. We'll give you a brief overview of where things go.

  • dist
  • src
    • assets
      • img
      • js
      • scss
    • data
    • helpers
    • layouts
    • pages
    • partials
    • styleguide

At its most basic, an HTML file inside the pages folder contains some basic HTML content, which is merged with a default.html layout file. Once you've run npm start, the gulp script gets to work, combines each page with its layout and saves the resulting complete HTML file. It also runs the necessary SASS compilation and saves the resulting CSS files in the assets folder. As you'd expect, dist(ribution) is where the final built files end up, ready for viewing in a browser.
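To make that merge concrete, a minimal layouts/default.html might look like this (the markup is our own sketch; Panini injects each page's content where the body partial appears):

```html
<!-- layouts/default.html: the shared shell wrapped around every page -->
<html>
  <head>
    <title>Larkswood Digital</title>
  </head>
  <body>
    <!-- Panini replaces this partial with the contents of the current page -->
    {{> body}}
  </body>
</html>
```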

Page content and layouts can be broken down into reusable partials but this folder is initially empty.

The styleguide contains a template file with standard HTML elements, which the gulp script deploys into your dist folder. This styleguide page lets you quickly review the current colour scheme, typography rules and other elements you'd expect your website to have.

The data and helpers folders are empty initially. Data can hold JSON or YML files, helpers can hold custom Handlebars helpers, and both quickly become useful, as we'll see.

Once installed, a quick npm start at the command line builds and loads the website on localhost, complete with browser syncing. When you're ready to build a production ready version with minified CSS/JS and optimised images, you can run npm run build.

Prototyping a build process

At this stage we hadn't thought much about what would power the final production website. This was a small enough project that it didn't warrant a full-scale CMS like Umbraco, WordPress, Kentico or even a file-based CMS like Kirby.

Having played with the simplistic static site generator that came with Foundation, it was at this point that we wondered if a proper static site generator like Jekyll might be a good option, or whether what we had was sufficient. On the surface it certainly seemed like it might. We decided to see how far we could go with it. After all, it gives you basic placeholders like {{page}} and {{title}}, what more could you need...

Exploring what Zurb Foundation could do

We began to consider how we might achieve the functional aspects of the website. We'd certainly want a method for contacting us, and realised we might have some issues there. As it turns out, it's quite easy to configure Panini to recognise things like PHP extensions and have them build in a similar way to the static HTML pages. A related issue was that Browsersync only supported static HTML, but you could set up a proxy and use a proper web server instead, such as IIS or Apache. With this we cleared the first hurdle pretty quickly.
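As a sketch of that proxy setup (the hostname here is our own placeholder), the Browsersync call in the gulpfile can be pointed at an existing server instead of serving the dist folder directly:

```javascript
// Instead of something like browser.init({ server: PATHS.dist }),
// proxy an IIS/Apache site that can execute the built PHP pages.
// "larkswood.local" is illustrative, not a real host.
browser.init({ proxy: "larkswood.local" });
```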

Then came the blog and case study sections. The question was how functional they would be: a fully formed blog engine, or something simpler? This was the first time we'd hit a section that might have a landing page and sub-pages, and we began to wonder how best to approach it. A manual approach would be a pain to maintain in the long run, adding items to the landing page every time we added a new blog post or case study, not to mention if we wanted a "recent post" on the main homepage, so we wondered how we could automate it. This was our second snag.

The problem is that each HTML page is built by Panini in isolation, and has no knowledge of other pages. This means you can't automatically generate navigation menus or breadcrumbs. We didn't have any breadcrumbs in our initial prototyping yet, so this wasn't a pressing issue; we'd simply created a navigation partial to include in our layout file to keep it consistent on all pages.

Enter data + helpers

You can extend the basic functionality of Panini and HandlebarsJS by using data and helpers.

Data comes in two forms: front matter that can be included on any HTML page, and JSON/YML files in the data folder. Front matter looks like this:

---
title: This is a page title
myAttribute: Some value
---

These attributes can then be used in layouts, partials and pages by referring to them like {{myAttribute}}.

JSON files in the data folder look as you'd expect them to. As an example, a JSON file named siteData.json with keys similar to the ones above becomes accessible to any page in the form {{siteData.myAttribute}}. It also allows you to create JSON arrays.
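For illustration, a hypothetical siteData.json along those lines might look like this (the keys and values are our own examples):

```json
{
  "myAttribute": "Some value",
  "blogEntries": [
    { "blogTitle": "How we got here", "date": "2016-06-01" },
    { "blogTitle": "Another post", "date": "2016-06-15" }
  ]
}
```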

HandlebarsJS comes with a few simple helpers and block helpers, but also gives you the ability to create your own custom ones. For example, there is a standard {{#each}} block, which enables you to loop through a JSON array, such as:

{{#each siteData.blogEntries}}
<h1>{{blogTitle}}</h1>
{{/each}}

We realised we could create a JSON file containing appropriate attributes which we could then refer to in the blog and case study landing pages. Through custom helpers we've added extra functionality, such as a replacement each block helper that limits the number of items returned from the array (useful for showing the most recent posts on the homepage), and helpers for rendering the date, blog titles, author names and so on. It's all simple JavaScript that runs during the build process when creating the flat HTML files.
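As a sketch of how such a helper could work (the function names and logic here are our illustration, not the actual helpers), the core of a "limited each" is just a slice before rendering:

```javascript
// Core logic of a hypothetical "limited each" block helper:
// render at most `limit` items from an array.
function renderLimited(items, limit, render) {
  return items.slice(0, limit).map(render).join("\n");
}

// Wiring it into HandlebarsJS would look roughly like this
// (options.fn is the compiled block body; names are illustrative):
//
// Handlebars.registerHelper("eachLimit", function (items, limit, options) {
//   return items.slice(0, limit).map(options.fn).join("");
// });
```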

The problem was that we'd still need to create both the pages themselves, and then add relevant data to the json files, which is only a little better than editing a static landing page anyway. We'd need to automate the generation of the json files themselves.

Generating the data

Once we knew what our data would need to look like, we added these attributes as front matter at the top of each case study and blog page. We then wrote a couple of node scripts to parse front matter from these pages, setting defaults first and overriding them where front matter attributes existed. For example:

  • The filename of each blog post, formatted as yyyy-mm-dd-a-blog-title.html provides a default title and blog post date, unless a title and date attribute is given in the front matter of the post itself.
  • Similarly for case studies, a title is extracted from the file name unless a title attribute is provided.
  • The first element of the HTML content is extracted and becomes an excerpt/intro for the post, unless an "excerpt" attribute is supplied.
  • Each blog post can have an "author" attribute, overriding the default "Larkswood Digital".
  • Each case study and blog post can have a draft "status" attribute, which means it'll only be included in the JSON data in non-production builds. You can commit work-in-progress blog posts and case studies to git and know they won't end up on the live site.
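The default-then-override idea can be sketched in plain node (the function names and the tiny front matter parser are our own; the real scripts will differ):

```javascript
// Parse simple "key: value" front matter between --- markers, if present.
function parseFrontMatter(source) {
  const match = source.match(/^---\n([\s\S]*?)\n---/);
  const data = {};
  if (match) {
    for (const line of match[1].split("\n")) {
      const idx = line.indexOf(":");
      if (idx > -1) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
  }
  return data;
}

// Derive defaults from a yyyy-mm-dd-a-blog-title.html filename,
// then let any front matter attributes override them.
function blogMeta(filename, source) {
  const fm = parseFrontMatter(source);
  const m = filename.match(/^(\d{4}-\d{2}-\d{2})-(.*)\.html$/);
  const defaults = m ? { date: m[1], title: m[2].split("-").join(" ") } : {};
  return Object.assign({}, defaults, fm);
}
```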

Adding these node scripts to the gulp build script meant that running npm start would parse our blog posts and case studies, generate the JSON data automatically, and then use these JSON files in the Panini build process. We could in theory take this a whole step further and build a JSON file that mirrors the structure of the entire site, creating data for site-wide navigation elements, but that's a project for another day. UPDATE: As of 2nd August 2016, we now generate a flat site-wide structure for auto-generating our sitemap.xml file. If and when we modify the data generation to build the actual hierarchy, we will be able to use this to auto-generate breadcrumb and main navigation partials.
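For the sitemap generation mentioned in the update, a flat list of page paths is all that's needed. A minimal sketch (the function name and base URL are our own, not the actual script):

```javascript
// Build a sitemap.xml string from a base URL and a flat list of page paths.
function buildSitemap(baseUrl, pages) {
  const urls = pages
    .map(p => `  <url><loc>${baseUrl}/${p}</loc></url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${urls}\n</urlset>`;
}
```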

The final build process, Git and Deploybot

We use Deploybot to monitor for new commits to our Git master branch. Deploybot can run build scripts in a container before copying the resulting files to the server. This means we don't need to include node modules, generated flat files or JSON data in our repository; it contains only the src folder, a gulp build script, plus bower, node and panini config files. After a commit is detected, Deploybot runs npm install, bower update and npm run build to generate the production-ready distribution files, before copying them over to the live server.

And that's how we got here.
