
How to make e-cards fun with the JAMStack

12/10/2020

Or 'how I wasted a lot of time over-engineering a solution to a problem that doesn't really exist.'

Semi-sarcastic rant about paper

As I write this, Christmas is fast approaching. Three of my friends have birthdays this month. And I have just moved house. All of these events have led to a dramatic increase in the number of cards I have had to deal with recently.
People who know me will understand that this annoys me. Paper annoys me. I go to great lengths to avoid dealing with it. It makes simple tasks inefficient, more expensive and less fault-tolerant, and it contributes to damaging the environment. I recently bought a house and was asked to print out the contracts and post two copies, despite having filled in the PDF digitally (which I had converted from a poorly formatted Word document). I just don't understand how a printout of a digitally created, digitally signed form is somehow more legally binding than the same thing sent over email, but that's a rant for another day.

There are, of course, good reasons for physical greetings cards existing. It feels nice to receive a card with a personalised handwritten message, and when displayed on a mantlepiece or shelf, they can liven up a room with colourful artwork. This rant would be more suited to the other junk mail I receive, but I am English and will therefore complain about anything given a chance.
So what's the problem? Well, unfortunately, not all cards fit that description. A fair few of the ones I've received this year can only be described as low-effort scribbles following a template of 'Dear {name}, {pre-printed message}. From {name}'. Quite honestly, I'd rather have a tacky e-card. At least that's less effort for me to ignore, whilst being slightly better for the planet. And if I made something fairly extensible, I could reduce the amount of effort I need to put into writing and sending cards! In the end, I definitely spent far more time on it than I would have done actually writing physical cards, but I see this as a long-term investment.
At this point some of you might be thinking 'oh great, he's just going to reinvent the e-card but somehow make it worse', and those who have found this blog after being given one of the 'cards' I have made will know that this is entirely correct. Those of you expecting some profound message or world-changing invention can leave now; that's probably never going to happen here. The only real moral to be learned from this story is 'I don't like paper, so please stop putting it through my letterbox', and perhaps 'I have too much time on my hands and really need to find a more useful place to put it'.
If, however, you are interested in the technical side of my creation, want to know what a JAMStack is, or want to know how the cards, and in fact this entire site, run entirely for free with no permanently running server, then read on.
πŸ†˜
Warning, many TLA's (three letter acronyms) lie ahead
Β 

What's a JAMStack? It sounds delicious

The JAMStack is described here as 'an architecture designed to make the web faster, more secure, and easier to scale'. That's a very marketing-speak way of saying 'it's a new, fashionable way of building web apps that all the cool kids are doing nowadays'. So it might be more useful to dive a bit deeper and look at some specifics.
The JAM in JAMStack, unfortunately, has no relevance to the tasty sandwich dressing but instead is yet another TLA (because the tech world didn't have enough of those).
  • J is for JavaScript. Most JAMStack sites use JavaScript for both client-side scripting and the server-side request/response cycle.
  • A is for API. All API routes are decoupled from any big monolithic codebase and instead run individually on-demand using services such as AWS Lambda.
  • M is for Markup. Templated markup should be prebuilt at build time, usually using a site generator for content sites, or a build tool for web apps.
I'm going to cover the two points that were new (and interesting) to me: pre-rendering and serverless functions. As always, there's plenty of better content on these topics out there if you want to learn more; this is just my experience.

Pre-rendering

Where possible pages are converted into static files and served from a CDN (content delivery network).
This means that pages will load very quickly even on slow connections, and for the most part, if the user has turned off JavaScript, the site will still function without much (if any) effort needed from the developer.
But wait a minute, isn't that how the web worked in the good old days? Why is this special?
Well, since computers and internet connections have become faster, it has become far more convenient to build what would once have been traditional desktop programs as websites or 'Web Apps'. The history of Web Apps is long, and you can find more than enough about it elsewhere, but the relevant point here is that SPAs (single-page applications) now dominate the web. Facebook, YouTube, Reddit, the BBC: basically, most Web Apps made within the last 5-10 years will probably use some form of SPA framework.
And these frameworks are great for complex apps. You write them much like you would write a traditional desktop app, except they automatically run on everything that has a web browser, so no more writing separate code for Windows/Mac/Linux/iOS/Android etc.
The main downside, however, is that they produce massive JavaScript bundles, which means slow download times and sluggish performance. Often to the extent that even with a recent $1000+ computer and a fast internet connection, you will see noticeable delays when loading and using the app (new Reddit is a good example of this).
This isn't such a big deal when the app is complex, like Google Sheets; you will spend far longer using it than waiting for it to load. But for the small stuff, like, say, an e-card? It's total overkill. No one wants to wait 5 seconds for that. If it loads instantly, that's at least 5 seconds of that person's life not wasted.
So the JAMStack solution to this issue, pre-rendering (which covers techniques such as SSR, SSG, and probably many other confusingly similar three-letter abbreviations), allows you to build a web app in (more or less) the same way that you would build an SPA, while still getting the benefits of 'pre-rendered' static pages. This means much quicker load times, greatly improved SEO, and accessibility for folks who keep their JS turned off. Nice!
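To make that a bit more concrete, here is a minimal sketch of what pre-rendering looks like in a Next.js page (the framework I ended up using, more on that shortly). The page and its card list are made up for illustration; the point is that everything in getStaticProps runs once at build time, so visitors just get plain HTML from a CDN.

// pages/index.js — a hypothetical pre-rendered page, not the card app's actual code.
// getStaticProps runs at build time, so the visitor receives ready-made HTML.
export async function getStaticProps() {
  // Anything here (reading files, calling an API) happens during the build,
  // not on every request. This card list is invented for the example.
  return { props: { cards: ['birthday', 'christmas', 'new-house'] } };
}

export default function Home({ cards }) {
  return (
    <ul>
      {cards.map((card) => (
        <li key={card}>{card}</li>
      ))}
    </ul>
  );
}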
OMG, that was a lot of TLAs... Don't worry though, plenty more to come!

'Serverless' backend

The other main part of a JAMStack is how the backend is managed. Traditionally for non-static sites, you need to set up a server to host your content and write some code to handle requests. Often you will need a database to store things, and an API to allow users to change the things that are stored.
So most of the previously mentioned web apps will have one (or more likely many) servers running 24/7, needing a team of skilled engineers to manage them. This is, as you might expect, a lot of work, and servers can get very expensive. The monthly server bill for a company I worked for recently was well into six figures.
However, for our tiny, relatively simple, and mostly stupid e-card app, we don't want to spend any money on it, and preferably we would spend very little time coding it. We certainly don't want to have any maintenance to do; 'please help, I tried to open your card, but the site is down!' is not something I want to hear on Christmas morning. I'd almost rather write paper cards (almost).
What's the solution then? Well, the JAMStack methodology is to go 'serverless'. Now, I think 'serverless' is a stupid term, mainly because it's a straight lie, but also because I'm disappointed it wasn't made into a TLA. The name comes from the fact that you don't need to manage a server. You simply write backend code, and something else deals with the hassle of firing up a server and running your code on it, when (and only when) you actually need it. So if you only get one API request, a server will only be working for you for the time it takes to load your code and deal with that one request. This is clearly a huge efficiency win, as there are many sites with dedicated servers running 24/7 that only spend a few seconds a day actually doing anything. I imagine these servers feel something like Marvin from HHG TTG (just had to get an extra 2 or 3 TLAs in there).
Now, 'serverless' is still a relatively new thing and isn't perfect for every use case. For example, after a function hasn't been used for a while, the machine running it will be repurposed. This creates the 'cold start' problem: a new server has to be set up and run if one isn't already waiting. So it's vital to choose a platform with a quick start-up time; Node.js, Python and Go are all popular choices here.
I'm a Clojure developer, and despite my love for the language, I will admit start-up time is not its strong point (discussed later in the 'But what about Clojure???' section).
So I have been forced to deal with the next(.js) best thing: JavaScript. I already sort of know it, and frameworks such as Next.js are made explicitly for building JAMStack sites. I won't go too much into the code in this post, but other than a couple of small exceptions, I didn't even realise I was writing backend code. Next does a lot of magic, and I will probably go into it further in the future, but my review so far, on this small app, is a very positive one. Though JavaScript syntax is still vastly inferior to Clojure, and I won't miss an opportunity to complain about it.
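For a flavour of how little 'backend' there is to write, here is a minimal sketch of a Next.js API route. This /api/hello route is a made-up example rather than one of the card's real endpoints; the point is that any file dropped into pages/api/ becomes a 'serverless' function that only runs when someone actually hits its URL.

// pages/api/hello.js — a hypothetical example route, not part of the card app itself.
// Any file under pages/api/ becomes a 'serverless' function, spun up on demand.
export default function handler(req, res) {
  const { name = 'stranger' } = req.query;
  res.status(200).json({ greeting: `Merry Christmas, ${name}!` });
}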

Get to the app!

Ok ok, so I've rambled about some methodology and hopefully explained how this site doesn't cost any money (because the servers only spin up when requested, providers such as Vercel and Netlify can offer generous free plans for hosting static sites with 'serverless' functions that don't get too much traffic). Still, now I guess I should talk about how the code actually works. It's probably not much of a mystery for most developers reading this, but there are a couple of interesting things which I shall share.

Overview of the card opening process

  • User clicks the link (which has a generated open graph image for extra personalisation™️)
  • The /surprise page is loaded, with the name form hidden if the name query param is present
  • User fills out the form and submits
  • Query params store name, greeting and user-defined options
  • The templated '/greeting' page shows greeting and relevant animation
  • A list of audio files is played; some are hardcoded (the pre-recorded ones) and some are text which is turned into an audio file via a serverless function
  • Users are forced to listen to my attempts to play the numerous musical instruments I have begun collecting since the COVID lockdown
  • User closes the tab (probably before all of the above steps are complete)
So the interesting things here are the open graph image and the generation of the audio, which both use 'serverless' functions.
The open graph image is specified as an HTML template and generated using the relevant info from the query params. The function converts the HTML to an image by spinning up a headless browser and taking a screenshot using chrome-aws-lambda and puppeteer, which it then returns in the response. (It was at this point that I wondered if I was over-engineering this Christmas card.)
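The function looks roughly like the sketch below. This is simplified and written from memory rather than copied from the real code, and the buildHtml helper is just a stand-in for the actual template.

// pages/api/og-image.js — a simplified sketch of the open graph image function.
import chromium from 'chrome-aws-lambda';

// Stand-in for the real HTML template, filled in from the query params.
const buildHtml = ({ name, greeting }) =>
  `<html><body><h1>${greeting}, ${name}!</h1></body></html>`;

export default async function handler(req, res) {
  // chrome-aws-lambda bundles a headless Chrome build small enough to run inside a lambda.
  const browser = await chromium.puppeteer.launch({
    args: chromium.args,
    executablePath: await chromium.executablePath,
    headless: chromium.headless,
  });
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 }); // the usual og:image size
  await page.setContent(buildHtml(req.query));
  const screenshot = await page.screenshot({ type: 'png' });
  await browser.close();

  res.setHeader('Content-Type', 'image/png');
  res.send(screenshot);
}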
The audio combines pre-recorded MP3s with files generated by the Google TTS API. There are a few reasons I did this server-side:
  • Some clients don't have a default TTS program (some variants of Linux don't ship with one)
  • TTS defaults to the language of the OS, which may not be English
  • For some reason, iOS will mute TTS (but not HTML Audio) if the silent switch is on. This results in weird missing patches in the message, possibly even alerting the user that this card might not be as personalised as it initially appeared.
  • When done server-side, I get full control over the voice. This means I can choose the voice that made me laugh the most. (listen to it say 'Clojure')
A GET request to /api/tts?text={text to speak} returns an audio stream of the given text. All audio files are asynchronously preloaded when the page first loads to warm up the serverless functions; otherwise, there would be a few seconds' delay between each phrase. I could have merged the audio files with another lambda function for extra smoothness, but if there ever was a point at which you could over-engineer a Christmas card, I thought that might be it, so I didn't.
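For the curious, the /api/tts route looks something like the sketch below. Again, this is simplified rather than the exact code, and the voice name is just one of Google's stock en-GB WaveNet voices, not necessarily the one that butchers 'Clojure' so delightfully.

// pages/api/tts.js — a simplified sketch of the text-to-speech function.
import textToSpeech from '@google-cloud/text-to-speech';

// Credentials have to come from somewhere (e.g. an environment variable on Vercel); omitted here.
const client = new textToSpeech.TextToSpeechClient();

export default async function handler(req, res) {
  const { text = '' } = req.query;

  const [response] = await client.synthesizeSpeech({
    input: { text },
    // Choosing the voice server-side means every listener hears the same (funny) one.
    voice: { languageCode: 'en-GB', name: 'en-GB-Wavenet-B' },
    audioConfig: { audioEncoding: 'MP3' },
  });

  res.setHeader('Content-Type', 'audio/mpeg');
  res.send(response.audioContent); // the MP3 data as a Buffer
}

On the client, 'preloading' is then just a matter of firing off a fetch to /api/tts for each phrase as soon as the page loads and holding on to the resulting audio blobs until they're needed.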

Some words on scaling

So this JAMStack thing looks to be pretty handy for the niche set of websites that are too complex to be a simple static site but too small to justify a full SPA. But what happens if my small site goes viral and, for some reason, lots of people want to send terrible e-cards to each other? Surely I'll have to put in loads of DevOps work to build a better solution, right?
Well, no, not really. Because we're using Vercel (other hosting solutions are available), we effectively have a full team of DevOps people keeping our servers running smoothly, for free. And Vercel does not impose limits on bandwidth or build minutes (though there are other limits to be aware of), so theoretically there is nothing stopping millions of people from accessing the cards they rightly deserve. However, that doesn't mean there aren't some drawbacks.
Language choice is quite limited. Netlify offers a choice of either Node.js or Go; Vercel offers those two plus Python and Ruby. So if you want something else, like Java (or Clojure/Kotlin/Scala etc.), then you'll have to do things manually with something more bare-metal like AWS Lambda (which is what I believe Vercel and friends use under the hood).
Also, as mentioned earlier, serverless functions need to be quick to start. They also generally need to have low memory usage and small response sizes (Vercel's limit is 5MB, so a server-generated video message might be tricky).
But because Vercel (or Netlify/AWS Amplify/Cloudflare Pages etc.) can execute our 'serverless' functions across all of the machines they have available to them (likely a lot), we don't really need to worry about scaling at all!
Just because servers are cheap and clients are generally fast with lots of bandwidth doesn't mean we should forget about efficiency and keep pushing out bloated SPAs backed by dedicated servers that sit idle most of the time. It's for this reason (along with effortless deployments and minimal, if any, running costs) that I can see this JAMStack thing taking over, at least for smaller projects where the developer time savings alone are significant. I probably wouldn't have bothered making my card app if I'd known I had to spend hours deploying and maintaining it, so clearly the world is already a better place thanks to the JAMStack!

But what about Clojure???

Yes, yes, I know: I'm supposed to be a Clojure developer, I work for a company that only takes on Clojure projects, so what am I doing messing around with (shudder) JavaScript?
Well, the fact is that ClojureScript doesn't work too well with Next.js. Thomas Heller (of shadow-cljs fame) experimented with it a couple of years back but didn't seem too enthusiastic about it. The alternative would have been to do things manually and deploy a standard CLJS project alongside some serverless functions, though that would have taken more time to set up, and I would be stuck with the previously mentioned issues of large bundle sizes and slow serverless functions.
I think the way to go would be to build a Clojure JAMStack-compatible framework from the ground up. However, having been part of building frameworks before (albeit not exactly on the front lines), I know that getting something familiar to the average ClojureScript developer (e.g. Reagent) with seamless (or at least Next.js-level 'seamless') SSR and 'magic' serverless function generation is not going to be an easy task. Others have tried before me and seem to have abandoned the attempt. This doesn't mean it's a bad idea, though, just that it will take some dedication to succeed.
Some thoughts that come to mind are:
  • Do we need React? If we don't, we can mainly rely on SSR and add client-side stuff to the bundle to be injected later, but we would lose a lot of tooling and libraries which would need replacing.
  • Is there a way to get Java to start up quickly? Clojure apps have a bad reputation for start-up time but this is mainly due to the loading of big namespaces. There might be a way to aggressively lazy load things to mitigate this.
  • Alternatively, we could use GraalVM. This excellent blog post shows promising results, and GraalVM support has certainly improved since it was written (see babashka or Firn for examples of what you can achieve with it)
  • Running CLJS server-side is an option too, with something like Lumo or Macchiato
But for now, with a sprinkle of Immutable.js and some willpower, I can make do without.

Conclusion

So it seems I've written quite a lot, and yet somehow managed to not really say anything. If you've made it this far then congratulations. You've earned a card. Let me know in the comment section if you have any questions or feedback. What? There is no comment section? Oh. Well, I guess that's my next project... I'll need some sort of database for that, but don't worry, I have a plan for how I can do that in a JAMStack (and 0 money) way too!
P.S.
(defn count-TLAs
  "Counts the three-letter acronyms (runs of exactly three capital letters) in a string."
  [content]
  (count (re-seq #"\b[A-Z]{3}\b" content)))

(count-TLAs blog-content)
;; => 36
