AWS Lambda functions via Netlify

This is note 4 of 5 in a series on publishing automation.

If you're a frontend developer in 2020, the serverless movement has given you agency you've never had before: writing server-side code without all the nuance and pain of infrastructure configuration. Netlify, a platform for hosting and continuously deploying your site without touching tools like Jenkins or Bamboo, takes the serverless idea and pushes the envelope even further with Netlify Functions. Now, creating a callable REST API that runs an AWS Lambda function is as simple as creating a directory in our repo, flipping a switch in Netlify and writing the code.

To achieve the dream of using Bear as an authoring tool for the web, I decided to leverage the power of Netlify Functions to write a service callable via Scriptable in iOS. The complete flow shapes up like this:

1. Write a blog post in Bear using a standard note template.
2. Run a Scriptable script in iOS that sends the note's markdown to our function's endpoint.
3. The function authenticates the request, parses the markdown and publishes the post to Contentful.

Define the scope of our function

Before hacking away, it's helpful to define exactly what the function should achieve:

- Accept a POST or PUT request whose body is the raw markdown of a Bear note
- Reject any request that doesn't carry the correct secret header
- Parse the markdown into the fields our blog post content model requires
- Create (or update) and publish the blog post in Contentful, returning an appropriate status code

With this spec, let's dive deeper into the code required to pull it off.

But first, some setup

The whole point of using Netlify over a cloud platform like AWS, Azure or GCP directly is to avoid all the required steps before you can actually write your code: setting up a billing account, provisioning permissions, configuring API gateways, etc. The luxury of Netlify Functions is we don't even need an AWS account, and the same generous free tier you get on all the platforms (~125k requests and 100 hours of run time) applies here too. Still, you do have to do a couple steps of setup:

1. Tell Netlify which directory in your repo holds your functions, either in the UI or in a netlify.toml file.
2. Add any secrets your function needs (in our case, BEAR_TO_CONTENTFUL) as environment variables in the Netlify UI.
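For reference, the functions directory can be declared in a netlify.toml at the root of the repo; the directory name below is just a placeholder:

```toml
[build]
  # Relative path to the directory containing our lambda functions
  functions = "functions"
```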

Write the code

To see the finished function currently in production, check it out on GitHub - I open sourced this site recently! 🍻

Now that we have our path declared and our workflow set up, we can commence with the actual programming. A very useful set of examples can be found on this playground site, progressing from hello world to writing a Slack integration. Copy and paste some code to get a feel for it.

For our case, the root function looks like this:

const { createBlogPost } = require('./src/actions/create');
const { BEAR_TO_CONTENTFUL } = process.env;

/**
 * A lambda function to create/update and publish blog posts in Contentful.
 */
exports.handler = async event => {

  // Ensure credentials are passed
  if (!event.headers['bear-to-contentful'] || (event.headers['bear-to-contentful'] !== BEAR_TO_CONTENTFUL)) {
    return {
      statusCode: 401,
      body: "Invalid Credentials"
    };
  }

  // Parse method and take appropriate action
  switch(event.httpMethod) {
    case 'POST':
    case 'PUT':
      return createBlogPost(JSON.parse(event.body));
    default:
      return {
        statusCode: 405,
        body: "Method Not Allowed"
      };
  }
};

We export a single anonymous, asynchronous function that receives the event parameter, which contains all the properties we need to parse the request: headers, body and httpMethod. In this function, we only need two checks:

1. Does the bear-to-contentful header match the secret stored in our environment variable?
2. Is the HTTP method a POST or a PUT?

If both conditions are met, we send the body into our createBlogPost function and try to parse, create and publish to Contentful. The key is that if at any point in the function we want to throw an error or return a successful response, we do that by returning an object with a statusCode and body.

Define an interface

Before diving into the createBlogPost function, we have to consider how our script will parse the things that our Contentful blog post content model requires (more on content modeling in the next post). Even if you're not using Contentful for your case, your CMS will still need to differentiate between things like your title, description and body, making this step generally relevant. In my case, I need to parse out these pieces:

- title
- slug
- date
- description
- categories
- body

With that in mind, I created a standard template that all blog posts I write in Bear will use. In doing this, we can create a predictable request body that is more easily parsable in our createBlogPost function. Here's what it looks like in markdown:

# Title
slug: my-slug
date: YYYY-MM-DD
description: A description
categories: First, Second
---
The body of my note

Parse the note

In our cloud function, the parse and format utility functions do the regex parsing to break down the markdown above into an object:

/**
 * Parses the input markdown and returns as an object.
 * @param {string} text the raw markdown to process
 */
const parse = text => {
    // Trim everything before first #
    const trimmedText = text.substring(/(.*?)#/s.exec(text)[1].length);

    // Parse categories
    const parsedCategories = /categories: (.*?)\n/.exec(trimmedText)[1];
    const categories = /,/.test(parsedCategories)
      ? parsedCategories.split(', ')
      : [parsedCategories];
    
    // Parse all other values
    return {
      title: /# (.*?)\n/.exec(trimmedText)[1],
      slug: /slug: (.*?)\n/.exec(trimmedText)[1],
      date: /date: (.*?)\n/.exec(trimmedText)[1],
      description: /description: (.*?)\n/.exec(trimmedText)[1],
      categories,
      body: trimmedText.substring(/(.*?)---/s.exec(trimmedText)[1].length + 4)
    };
}

/**
 * Formats a flat object into desired Contentful payload.
 * @param {string} text the raw markdown to process
 */
const format = text => {
  const { title, slug, date, description, categories, body } = parse(text);
  console.log(`\n
  Note details:
  —
  title: ${title}
  slug: ${slug}
  date: ${date}
  description: ${description}
  categories: ${categories}
  body: ${body.substring(0, 60)}…
  —
  `);
  return {
    'fields': {
      'title': { 'en-US': title },
      'slug': { 'en-US': slug },
      'date': { 'en-US': date },
      'shortDescription': { 'en-US': description },
      'category': { 'en-US': categories },
      'body': { 'en-US': body }
    }
  }
};

module.exports = {
  format
}

There are dozens of ways we could write the regex, and I'm sure what's written above isn't the most efficient, but it works for our purposes. Once the parsing is done, we can move on to the tricky business of forming the fetch requests to the Contentful Content Management API, which will be the subject of the last article in this series.

Thanks for reading! Go home for more notes.