Coding with Jesse

Introduction to WebGL and shaders

I recently worked on a project where I needed to use WebGL. I was trying to render many thousands of polygons on a map in the browser, but GeoJSON turned out to be way too slow. To speed things up, I wanted to get down to the lowest level possible, and actually write code that would run directly on the GPU, using WebGL and shaders. I'd always wanted to learn about shaders, but never had the chance, so this was a great opportunity to learn something new while solving a very specific technical challenge.

At first, it was quite the struggle to figure out what I needed to do. Copying and pasting example code often didn't work, and I didn't really get how to go from the examples to the custom solution I needed. However, once I fully understood how it all fit together, it suddenly clicked in my head, and the solution turned out to be surprisingly easy. The hardest part was wrapping my head around some of the concepts. So, I wanted to write up an article explaining what I'd learned, to help you understand those concepts, and hopefully make it easier for you to write your first shader.

In this article, we'll look at how to render an image to the page with over 150 lines of code! Silly, I know, considering we can just use an <img> tag and be done with it. But doing this is a good exercise because it forces us to introduce a lot of important WebGL concepts.

Here's what we'll do in this article:

  1. We'll write two shader programs, to tell the GPU how to turn a list of coordinates into coloured triangles on the screen.

  2. We'll pass the shaders a list of coordinates telling them where to draw the triangles on the screen.

  3. We'll create an "image texture", uploading an image into the GPU so it can paint it onto the triangles.

  4. We'll give the shader a different list of coordinates so it knows which image pixels go inside each triangle.

Hopefully you can use these concepts as a starting point for doing something really cool and useful with WebGL.

Even if you end up using a library to help with your WebGL code, understanding the raw API calls behind the scenes is useful for knowing what is actually going on, especially if things go wrong.

Getting started with WebGL

To use WebGL in the browser, you'll need to add a <canvas> tag to the page. With a canvas, you can either draw using the 2D Canvas API, or you can choose to use the 3D WebGL API, either version 1 or 2. (I don't actually understand the difference between WebGL 1 and 2, but I'd like to learn more about that some day. The code and concepts I'll discuss here apply to both versions though.)

If you want your canvas to fill the viewport, you can start with this simple HTML:

<!doctype html>
<html lang="en">
    <meta charset="UTF-8">
    <title>WebGL</title>
    <style>
        html, body, canvas {
            width: 100%;
            height: 100%;
            border: 0;
            padding: 0;
            margin: 0;
            position: absolute;
        }
    </style>
    <body>
        <canvas></canvas>
        <script></script>
    </body>
</html>

That will give you a blank, white, useless page. You'll need some JavaScript to bring it to life. Inside the <script> tag, add these lines to get access to the WebGL API for the canvas:

const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl');
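
Note that getContext will return null if the browser or device doesn't support WebGL, so it's worth adding a guard before going any further:

if (!gl) {
    throw new Error('WebGL is not supported by this browser');
}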

Writing your first WebGL shader program

WebGL is based on OpenGL, and uses the same shader language. That's right, shader programs are written in a language of their own: GLSL, the OpenGL Shading Language.

GLSL reminds me of C or JavaScript, but it has its own quirks, and is very limited but also very powerful. The cool thing about it is that it runs right on the GPU instead of on the CPU, so it can do things very quickly that normal CPU programs can't. It's optimized for math operations using vectors and matrices. If you remember your matrix math from algebra class, good for you! If you don't, that's ok! You won't need it for this article anyway.

There are two types of shaders we'll need: vertex shaders and fragment shaders. Vertex shaders can do calculations to figure out where each vertex (corner of a triangle) goes. Fragment shaders figure out how to color each fragment (pixel) inside a triangle.

These two shaders are similar, but do different things at different times. The vertex shader runs first, to figure out where each triangle goes, and then it can pass some information along to the fragment shader, so the fragment shader can figure out how to paint each triangle.

Hello, world of vertex shaders!

Here's a basic vertex shader that will take in a vector with an x,y coordinate. A vector is basically just an array with a fixed length. A vec2 is an array with 2 numbers, and a vec4 is an array with 4 numbers. So, this program will take a global "attribute" variable, a vec2 called "points" (which is a name I made up).

It will then tell the GPU that that's exactly where the vertex will go by assigning it to another global variable built into GLSL called gl_Position.

It will run for each pair of coordinates, for each corner of each triangle, and points will have a different x,y value each time. You'll see how we define and pass those coordinates later on.

Here's our first "Hello, world!" vertex shader program:

attribute vec2 points;

void main(void) {
    gl_Position = vec4(points, 0.0, 1.0);
}

No calculation was involved here, except we needed to turn the vec2 into a vec4. The first two numbers are x and y, the third is z, which we'll just set to 0.0 because we're drawing a 2-dimensional picture and we don't need to worry about the third dimension. (The fourth value is w, the homogeneous coordinate: the GPU divides x, y and z by w, which is what makes perspective and other matrix math work out. Since we're drawing flat 2D, we just set it to 1.0 so the division changes nothing.)

I like that in GLSL, vectors are a basic data type, and you can easily create vectors using other vectors. We could have written the line above like this:

gl_Position = vec4(points[0], points[1], 0.0, 1.0);

but instead, we were able to use a shortcut and just pass the vec2 points in as the first argument, and GLSL figured out what to do. It reminds me of using the spread operator in JavaScript:

// javascript
gl_Position = [...points, 0.0, 1.0];

So if one of our triangle corners had an x of 0.2 and a y of 0.3, our code would effectively be doing this:

gl_Position = vec4(0.2, 0.3, 0.0, 1.0);

but we can't just hardcode the x and y coordinates into our program like this, or all the triangles would just be a single point on the screen. We use the attribute vector instead so that each corner (or vertex) can be in a different place.

Colouring our triangles with a fragment shader

While vertex shaders run once for each corner of each triangle, fragment shaders run once for each coloured pixel inside each triangle.

Whereas vertex shaders define the position of each vertex using a global vec4 variable called gl_Position, fragment shaders work by defining the colour of each pixel with a different global vec4 variable called gl_FragColor. Here's how we can fill all our triangles with red pixels:

void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

The colour vector here is RGBA: a number between 0 and 1 for each of red, green, blue and alpha. The example above sets each fragment, or pixel, to bright red with full opacity.
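
For example, here's how you would write a half-opacity green instead:

gl_FragColor = vec4(0.0, 1.0, 0.0, 0.5);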

Accessing an image inside your shaders

You wouldn't normally fill all your triangles with the same solid colour, so instead, we want the fragment shader to reference an image (or "texture") and pull out the right colour for each pixel inside our triangles.

We need to access both the texture with the color information, as well as some "texture coordinates" that tell us how the image maps onto the shapes.

First, we'll modify the vertex shader to access the coordinates and pass them on to the fragment shader:

attribute vec2 points;
attribute vec2 texture_coordinate;

varying highp vec2 v_texture_coordinate;

void main(void) {
    gl_Position = vec4(points, 0.0, 1.0);
    v_texture_coordinate = texture_coordinate;
}

If you're like me, you're probably worried there will be all sorts of crazy trigonometry needed, but don't worry - it turns out to be the easiest part, thanks to the magic of the GPU.

We take in a single texture coordinate for each vertex, but then we pass it on to the fragment shader in a varying variable, which will "interpolate" the coordinates for each fragment or pixel. This is essentially a percentage along both dimensions, so that for any particular pixel inside the triangle, we'll know exactly which pixel of the image to choose. For example, a fragment exactly halfway between a vertex with coordinate (0, 0) and one with (1, 1) will receive the interpolated value (0.5, 0.5).

The image is stored in a 2-dimensional sampler variable called sampler. We receive the varying texture coordinate from the vertex shader, and use a GLSL function called texture2D to sample the appropriate single pixel from our texture.

The only part where we need to do any math is associating each vertex of our triangles with the coordinates of our image, and we'll see later that that turns out to be pretty easy too. Here's the new fragment shader:

precision highp float;
varying highp vec2 v_texture_coordinate;
uniform sampler2D sampler;

void main() {
    gl_FragColor = texture2D(sampler, v_texture_coordinate);
}

Compiling a program with two shaders

We've just looked at how to write two different shaders using GLSL, but we haven't talked about how to use them from JavaScript. You just need to get these GLSL shaders into JavaScript strings, and then use the WebGL API to compile them and put them on the GPU.

Some people like to put shader source code directly in the HTML using script tags like <script type="x-shader/x-vertex">, and then pull out the code using innerText. You could also put the shaders into separate text files and load them with fetch. Whatever works for you.

I find it easiest to just write the shader source code directly in my JavaScript with template strings. Here's what that looks like:

const vertexShaderSource = `
    attribute vec2 points;
    attribute vec2 texture_coordinate;

    varying highp vec2 v_texture_coordinate;

    void main(void) {
        gl_Position = vec4(points, 0.0, 1.0);
        v_texture_coordinate = texture_coordinate;
    }
`;

const fragmentShaderSource = `
    precision highp float;
    varying highp vec2 v_texture_coordinate;
    uniform sampler2D sampler;

    void main() {
        gl_FragColor = texture2D(sampler, v_texture_coordinate);
    }
`;

Next, we need to create a GL "program" and add those two different shaders to it like this:

// create a program (which we'll access later)
const program = gl.createProgram();

// create a new vertex shader and a fragment shader
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);

// specify the source code for the shaders using those strings
gl.shaderSource(vertexShader, vertexShaderSource);
gl.shaderSource(fragmentShader, fragmentShaderSource);

// compile the shaders
gl.compileShader(vertexShader);
gl.compileShader(fragmentShader);

// attach the two shaders to the program
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);

Lastly, we have to tell GL to link and use the program we just created. Note, you can only use one program at a time:

gl.linkProgram(program);
gl.useProgram(program);

If something went wrong with our program, we should log the error to the console. Otherwise, it will silently fail:

if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program));
}
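
The link status only tells you that something failed. If the problem is a syntax error in one of the shaders, the compile logs are more helpful, so it's worth checking those too:

// log any compile errors in either shader
for (const shader of [vertexShader, fragmentShader]) {
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        console.error(gl.getShaderInfoLog(shader));
    }
}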

As you can see, the WebGL API is very verbose. But if you look through these lines carefully, you'll see they're not doing anything too surprising. These chunks of code are perfect for copying and pasting, because it's hard to memorize them and they rarely change. The only part you might need to change is the shader source code in the template strings.

Drawing triangles

Now that we have our program all wired up, it's time to feed it some coordinates and get it to draw some triangles on the screen!

First, we need to understand the default coordinate system for WebGL. It's quite different from your regular pixel coordinate system on the screen. In WebGL, the center of the canvas is 0,0, the bottom-left corner is -1,-1, and the top-right corner is 1,1.

If we want to render a photograph, we need a rectangle. But WebGL only knows how to draw triangles, so we'll build our rectangle out of two of them: one triangle covering the bottom-left half, and another covering the top-right, like this:

[Image: WebGL coordinate system]

To draw triangles, we'll need to specify the coordinates of the three corners of each triangle. The x and y coordinates of both triangles will all go into a single array, like this:

const points = [
    // first triangle
    // bottom left
    -1, -1,

    // bottom right
    1, -1,

    // top left
    -1, 1,

    // second triangle
    // top right
    1, 1,

    // bottom right
    1, -1,

    // top left
    -1, 1,
];

To pass a list of numbers into our shader program, we have to create a "buffer", then load an array into the buffer, then tell WebGL to use the data from the buffer for the attribute in our shader program.

We can't just load a plain JavaScript array into the GPU; the data has to be strictly typed, so we wrap it in a Float32Array. We could also use integers or whatever type makes sense for our data, but for coordinates, floats make the most sense.

// create a buffer
const pointsBuffer = gl.createBuffer();

// activate the buffer, and specify that it contains an array
gl.bindBuffer(gl.ARRAY_BUFFER, pointsBuffer);

// upload the points array to the active buffer
// gl.STATIC_DRAW tells the GPU this data won't change
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(points), gl.STATIC_DRAW);

Remember the attribute called "points" that I made at the top of our shader program, with the line attribute vec2 points;? Now that our data is in the buffer, and the buffer is active, we can fill that "points" attribute with the coordinates we need:

// get the location of our "points" attribute in our shader program
const pointsLocation = gl.getAttribLocation(program, 'points');

// pull out pairs of float numbers from the active buffer
// each pair is a vertex that will be available in our vertex shader
gl.vertexAttribPointer(pointsLocation, 2, gl.FLOAT, false, 0, 0);

// enable the attribute in the program
gl.enableVertexAttribArray(pointsLocation);
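
That vertexAttribPointer call is pretty dense, so here is the same call again with every argument annotated:

gl.vertexAttribPointer(
    pointsLocation, // which attribute to fill
    2,              // take 2 values per vertex (our x and y)
    gl.FLOAT,       // the buffer contains 32-bit floats
    false,          // don't normalize the values
    0,              // stride: 0 means the values are tightly packed
    0               // offset: start reading at the beginning of the buffer
);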

Loading an image into a texture

In WebGL, textures are a way to provide a bunch of data in a grid that can be used to paint pixels onto shapes. Images are an obvious example: they're a grid of red, green, blue and alpha values along rows and columns. But you can use textures for things that aren't images at all; like all information in a computer, a texture ends up being nothing other than lists of numbers.

Since we're in the browser, we can use regular JavaScript code to load an image. Once the image has loaded, we'll use it to fill the texture.

It's probably easiest to load the image before running any WebGL code, and then kick off all the WebGL initialization after the image has loaded, so we never have to wait on anything, like this:

const img = new Image();
img.src = 'photo.jpg';
img.onload = () => {
    // assume this runs all the code we've been writing so far
    initializeWebGLStuff();
};
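
One caveat: if the image lives on a different origin than your page, uploading it to a texture will throw a security error unless the server allows it through CORS. In that case, you'd also need to set the crossOrigin property before setting src (the URL here is just a placeholder):

const img = new Image();
img.crossOrigin = 'anonymous'; // ask for CORS access so WebGL can read the pixels
img.src = 'https://example.com/photo.jpg';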

Now that our image has loaded, we can create a texture and upload the image data into it.

// create a new texture
const texture = gl.createTexture();

// specify that our texture is 2-dimensional
gl.bindTexture(gl.TEXTURE_2D, texture);

// upload the 2D image (img) and specify that it contains RGBA data
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);

Since our image probably doesn't have power-of-two dimensions, we also have to tell WebGL how to choose which pixels to draw when enlarging or shrinking our image, otherwise it will throw an error.

// tell WebGL how to choose pixels when drawing our non-square image
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

// bind this texture to texture #0
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);

Lastly, we want to access this texture in our shader program. We defined a 2-dimensional uniform sampler variable with the line uniform sampler2D sampler;, so let's tell the GPU that our new texture should be used for that.

// use texture unit 0 for the uniform in our program called "sampler"
gl.uniform1i(gl.getUniformLocation(program, 'sampler'), 0);

Painting triangles with an image using texture coordinates

We're almost done! The next step is very important: we need to tell our shaders how and where our image should be painted onto our triangles. We want the top-left corner of our image to land on the top-left corner of our rectangle, and so on.

Image textures have a different coordinate system than our triangles use: texture coordinates run from 0 to 1 in each dimension instead of -1 to 1, and the vertical axis points the other way, so unfortunately we can't just reuse the exact same coordinates. Here's how they differ:

[Image: comparing WebGL coordinates with texture coordinates]

The texture coordinates should be in the exact same order as our triangle vertex coordinates, because that's how they will show up together in the vertex shader. As our vertex shader runs for each vertex, it will also be able to access each texture coordinate, and pass that along to the fragment shader as a varying variable.

We'll use almost the same code we used to upload our array of triangle coordinates, except now we'll be associating it with the attribute called "texture_coordinate".

const textureCoordinates = [
    // first triangle
    // bottom left
    0, 1,

    // bottom right
    1, 1,

    // top left
    0, 0,

    // second triangle
    // top right
    1, 0,

    // bottom right
    1, 1,

    // top left
    0, 0,
];

// same stuff we did earlier, but passing different numbers
const textureCoordinateBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, textureCoordinateBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(textureCoordinates), gl.STATIC_DRAW);

// and associating it with a different attribute
const textureCoordinateLocation = gl.getAttribLocation(program, 'texture_coordinate');
gl.vertexAttribPointer(textureCoordinateLocation, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(textureCoordinateLocation);

Last step, draw some triangles

Now that we have our shaders and all our coordinates and our image loaded in the GPU, we're ready to actually run our shader program and have it draw our image onto the canvas.

To do that, we just need one line of code:

gl.drawArrays(gl.TRIANGLES, 0, 6);

This tells WebGL to draw triangles using both our points array and the texture coordinates array. The number 6 here is the number of vertices to draw: each of our two triangles has 3 corners, and each vertex uses up one x,y pair from each of our arrays.

See the code live on CodePen.io

Just the beginning?

Isn't it amazing how many different things you need to learn to draw an image using the GPU? I found it to be a huge learning curve, but once I wrapped my head around what shaders actually do, what textures are, how to provide shaders with some lists of numbers, and how it all fits together, it started to make sense and I realised how powerful it all is.

I hope you've been able to get a glimpse of some of that simplicity and power. I know the WebGL API can be painfully verbose, and I'm still not totally sure what every function does exactly. It's definitely a new programming paradigm for me, because a GPU is so different from a CPU, but that's what makes it so exciting.

Published on September 15th, 2021. © Jesse Skinner

MySQL: Using a password on the command line interface can be insecure

If you've ever tried to use the MySQL command line tool in some automated bash scripts or cron jobs, you'll probably wonder how to pass your password to MySQL without having to type it in. The most obvious choice would be to use the -p or --password command line arguments. If you've ever tried that, you'll probably have seen this warning message:

mysql: [Warning] Using a password on the command line interface can be insecure.

This is because any other users or programs running on that computer will have the opportunity to see the password: on Linux, the command line arguments of a process are publicly visible, for example to anyone running ps.

Today I learned a better way to do this. Actually I learned two ways from this Stack Overflow question:

  1. You can use a configuration file, either in ~/.my.cnf or in a file specified with the --defaults-file command line argument, or
  2. You can use the MYSQL_PWD environment variable.

For me, #2 was the easiest solution, even though it's generally not the most secure option. The MySQL documentation warns:

This method of specifying your MySQL password must be considered extremely insecure and should not be used. Some versions of ps include an option to display the environment of running processes. On some systems, if you set MYSQL_PWD, your password is exposed to any other user who runs ps. Even on systems without such a version of ps, it is unwise to assume that there are no other methods by which users can examine process environments.

If this sounds scary, go with the configuration file approach. I'm not worried about environment variables on my servers, and this other Stack Overflow thread, Is passing sensitive data through the process environment secure?, explains that this used to be a big concern, but nowadays the MySQL documentation warning is a bit over-the-top. These days, on most Linux systems, environment variables are a safe way to pass around secrets.
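
For reference, the configuration file is just a plain INI-style file with a [client] section. The credentials here are placeholders, and the file should be readable only by you (chmod 600 ~/.my.cnf):

[client]
user=myuser
password=mysecretpassword

With that saved as ~/.my.cnf, running mysql will pick up the credentials automatically, with no password on the command line or in the environment.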

Since I was already using environment variables to store my database credentials, I decided to just change my bash script from this:

#!/bin/bash

mysql -h "$MYSQL_HOSTNAME" -u "$MYSQL_USERNAME" "$MYSQL_DATABASE" --password="$MYSQL_PASSWORD" "$@"

to this:

#!/bin/bash

MYSQL_PWD="$MYSQL_PASSWORD" mysql -h "$MYSQL_HOSTNAME" -u "$MYSQL_USERNAME" "$MYSQL_DATABASE" "$@"

Sure, I could've renamed my environment variable from MYSQL_PASSWORD to MYSQL_PWD, but I have other code relying on that, so this was the easiest way to eliminate that warning.

Published on April 7th, 2021. © Jesse Skinner

Why I quit Twitter

Update November 2022: Why I love Mastodon. Follow me at https://toot.cafe/@JesseSkinner

I used to love Twitter. When I signed up in 2008, it was a new place to publicly talk with people from around the world. Over time, more people joined, and it became a great place to hear the latest news and gossip in the web development community. I enjoyed seeing famous people chatting and even roasting each other. It was exciting to hear rumours and see things unfold in real-time that would end up on the mainstream news. I'd spend a lot of time crafting tweets. Tweeting became part of my thought process, so that when something interesting in my life happened, I'd instantly start thinking of how to explain it in 140 characters.

Last year, that novelty wore off. I don't know if it was because of the increasingly divisive political environment in the USA, or the pandemic making everyone more stressed out and less patient, or if it was something that would've happened regardless. Every time I went on there, I'd come away with a bad taste in my mouth, angry or frustrated about one thing or another. I don't mean angry about injustices or world events, where the feeling is justified. I'm talking about being furious that somebody was wrong on the Internet!

Twitter started to remind me of one big comments section for the Internet, with everyone fighting about everything. People I admired would regularly get swept up in this hatred. One person vents their frustration over something, another tries to make a tasteless joke, others jump in to attack the replies, others jump in to defend, and on it goes.

I already stopped using Facebook years ago, and never got into Instagram or other platforms much, so Twitter had become the place I'd go on my phone when I had some spare time or just felt like zoning out for a while. I started to notice a pattern, that I would frequently tell my wife about things unfolding on Twitter, usually some big controversy that had everyone riled up. Whether I agreed or disagreed, I would get swept up in it. She was very patient to let me tell these stories, but would remind me that I don't have to share everything that happened on there.

The problem was, Twitter was taking up more and more of my thoughts and my mental energy. I would find myself thinking about these controversies and tweets in my free time, wondering if I should jump in with my opinion, but more often, I was just emotionally watching these things unfold from the sidelines, scared to be caught in the crossfire.

I finally decided to quit on Christmas morning. After opening presents from Santa and having Christmas breakfast, I habitually and unfortunately pulled out my phone and turned to Twitter for some heart-warming tweets. The first thing I saw was a politician wishing everyone a happy holiday, and another politician replying to that with "Bite me". I couldn't believe it. I wanted to write back and call this politician out for being crass, tasteless and mean. But I realised that then I would be swept into this hate-fest as well, on a day when I should be focused on my family and having a lovely day. I decided right then that this would be the last time I read my feed on this hateful, toxic platform.

I decided that I don't need Twitter at all. I can't think of any real benefits I'd ever gained from using Twitter. I already have my blog, my newsletter and my YouTube and Twitch channels, where I can share my ideas and express myself. There's no good reason to continue to pour my energy into crafting content that ends up contributing to and supporting a private platform I don't even like.

The past few months have been wonderful. I installed a nice, simple blog reader called "Flym" on my phone, and subscribed to a bunch of web development blogs, some I had read on Google Reader back in the day, others were new to me. It's so much nicer to read longer, well-thought-out blog posts, that leave me feeling inspired, educated, and curious about the topics I'm interested in.

The one thing I'm kind of missing is the ability to write very short content to share a small thing I'm excited about or find funny. I figure I'm better off putting that creative energy into blogging and writing newsletters more often, even if that means writing shorter posts, though probably more than 140 or 280 characters at a time.

Anyway, I just wanted to get off my chest all the reasons why you hopefully won't find me active on Twitter anymore, and let you know that you'll likely find me writing blog posts on here more often.

Here are some other web development bloggers I've been enjoying lately, and I'm eager to find some more. If you also have a blog, I'd love to hear from you so I can subscribe to your RSS feed in my blog reader!

Published on March 21st, 2021. © Jesse Skinner

Lessons learned from my first video course

I've wanted to launch one of my side projects for a very long time. I'm the kind of guy who loves starting things, but never finishes them. Well, this week I finally finished one of them, by launching The Joy of Svelte, my first online video course!

Finding inspiration in an old five-year plan

Back in December, 2019 (a year ago, but feels like a decade), I got a new notebook for Christmas, because I had filled up my old one that I use for meeting notes, To Do lists, and stuff like that. I started re-reading my old one, and saw that near the start, I had a five-year plan from December, 2015. I had a goal to expand beyond my freelancing business and launch my first video course in 2016, with the goal of continuing to create courses, apps and other products over the coming years. By the year 2020, I wanted to have a whole catalog of courses and products under my belt.

Well, when I read that, four years had passed, and I still hadn't launched anything. Not a single app or course, other than the dozens of free videos I'd recorded for YouTube and, of course, all the client work I'd done as a freelance web developer. But I still wanted to do something for myself, something of my own creation.

The disappointment and shame I felt while reading this was the push I needed to finally commit to this lifelong plan and stick with one of my side projects long enough to actually see it through to launch. So, in January, 2020, I decided that I would focus on the things I was most excited about: teaching, recording videos, and Svelte.

Trying to stay motivated by committing publicly

I had it all figured out. I put up a landing page that said "Coming in the Spring of 2020", and publicly announced that I was working on a new course. I committed myself to my newsletter subscribers and Twitter followers, I put out an announcement on YouTube, and then hoped that all the public accountability would force me to follow through and finally launch something.

Well, that was all back in February. Spring came and went, and I was still stuck planning and trying to decide on the course contents. In June, I had to update the landing page and change it to "Coming Summer 2020..."

Coding as a form of procrastination

I decided early on to self-publish The Joy of Svelte by developing my own video course platform. I'm a web developer, after all, and it's way too easy to feel like I'm being productive when I'm writing code. So in a way, it was a form of procrastination, because I could sit down to integrate Stripe, or create a video player interface, or write code to deal with emailing out access links, and feel like I was making progress. In reality, I could've just used one of the many video course platforms available and saved myself a lot of time and effort.

Having now built all that out, I'm happy I did, because now I can self-publish more courses in the future. But I realise now that I could have launched a lot sooner if I had focused on recording videos and spent less time writing code.

Pivoting to focus on learning objectives

I started to record some videos, with the idea of making an SVG drawing app using Svelte. I recorded three videos showing how to do this, until I got to a point where it was starting to be more about SVG particulars and less about Svelte.

Eventually, I came across some very useful advice about creating course content: identify specifically what you want people to learn, then go and teach those things. I know that seems super obvious, but somehow I'd lost track of that, and was instead accidentally making a course teaching people to build an SVG drawing app. I don't think many web developers have the need to build SVG drawing web apps.

I looked at the landing page I'd originally made, and saw that I'd already outlined some key topics that I was planning to include:

You will learn about:

  • getting started with Svelte
  • using templating syntax to render data
  • data fetching strategies for components
  • using components to simplify complex web user interfaces
  • building custom stores for state management
  • integrating Svelte into an existing web application
  • ...and more!

I decided to make six new videos, each one focused on one of these learning objectives. It was a simple, straightforward approach that ended up working very well, because it kept me focused on what it is I wanted people to learn, and less on what cool thing I wanted to build as a code example.

Back to the drawing board

So I abandoned the SVG drawing app videos, and started from scratch. I looked for some simple free web APIs and found one for Nobel Prizes, and decided that I'd use that to show people how to fetch data from an API. It needed very little explanation, didn't introduce any new, unrelated concepts, and more closely resembled the kind of API that I'd often used to build web interfaces for my clients. It might not be super fancy or flashy, but it allowed me to focus on Svelte instead, which is what mattered.

Off screen, I sat down and built a UI for browsing, searching and filtering Nobel Prizes, to see if that would work well for the videos, and it turned out to be perfect. It gave me lots of different opportunities to demonstrate various Svelte features, plenty of ways to show off what makes Svelte a joy to work with, and all the different strategies for making clean, reusable web components. None of it felt contrived; all of it was applicable to real world web applications. I was ready to start recording.

Early access pre-launch and a final push to finish

Summer 2020 was coming to an end, and I did not want to change the release date on the landing page again. So, in one day, I sat down and recorded three of the six videos. I uploaded them to YouTube as unlisted videos, and on the very last day of summer, I sent out an email to my newsletter subscribers announcing that Early Access was now available.

It felt so good when I had my first sale ever! And then another one came! And then, while I was sleeping, another one! People were actually willing to pay me for my videos! This was a huge milestone in my life and career, and really validated all the work I'd put into it.

Still, I had three more videos to record to finish it up.

Benefiting from my own misfortune

Then something horrible happened. I recorded two more videos in one day, but when I finished, it turned out that OBS had used the wrong microphone, and so the audio was total garbage. I had to painfully decide to throw those videos out and re-record them.

Actually, that turned out to be beneficial, because I wasn't totally happy with some of my examples, and ended up coming up with better ones that demonstrated the strengths and weaknesses of the different types of Svelte stores before I re-recorded the videos.

Launch day, and being too early

Three months after my Early Access launch, I had finally finished all the videos, and was ready to put the finishing touches on my web site, so that people could get a private link to watch videos directly on joyofsvelte.com instead of on YouTube. Using unlisted YouTube videos had felt a bit unprofessional, although I don't think anybody would have complained if I had stuck with using them.

Finally, on Monday, December 14th, 2020, I launched my first video course ever! I created a promotional video, and posted it with an emoji-filled tweet to Twitter.

On launch day I had two sales, and woke up the next morning to a third sale. I had tempered my expectations so that I wouldn't be disappointed, and so I was actually pleasantly surprised to make any sales that day. I'd figured most people who were excited about the course would have bought it during Early Access, and that turned out to be mostly true.

I also have come to realise that I'm probably way too early to be launching a course about Svelte. I chose Svelte because I'm so excited about it, and am happy to talk about Svelte endlessly, but the fact is, Svelte is not yet widely adopted amongst web developers, so there really isn't a huge audience there yet. It's still somewhat a niche topic. And that's okay, but it means that there was no way I was going to have a ton of sales on the first day.

There just aren't that many people learning Svelte right now. I think this will change over the coming months and years, and I'm glad to have put this course out into the wild to help people looking to learn Svelte. I hope it helps people to see what it is about Svelte that I find exciting, and why it has changed the way I approach web development altogether.

Lessons learned for the next course

This won't be my last course, it's just the beginning. Here are some lessons I've learned from building this course, that will change the way I approach building my next video courses.

  1. I'll focus on learning objectives from the start. I'll make a short list of what I think people will want to learn about, and make videos focused on those points. The code examples I use will be chosen for how well they can demonstrate those key learning objectives.

  2. I'll avoid perfectionism, and limit how much time I spend planning the course up front. Planning is a trap that I fell into, because you can keep planning the same thing forever. At some point you have to say "good enough" and start doing the actual work. Chances are, when you actually start recording the videos, you'll find out the best way to do things.

  3. I ended up re-recording a lot of The Joy of Svelte by accident, and that benefitted me by allowing me to improve the content before recording the final videos. I will do this on purpose next time, maybe live streaming the content on Twitch, or possibly running a workshop beforehand, so hopefully I can get some useful feedback first as well. (And I'll try to remember to double check my microphone before recording the final videos!)

On to the next side project

One of the best things about launching The Joy of Svelte, is that I can now start working on all the other side projects and ideas I came up with this year, but wouldn't allow myself to work on until the course launched. If you're interested in following along, you can sign up for my newsletter.

And, of course, if you're interested in learning Svelte, check out The Joy of Svelte!

Published on December 16th, 2020. © Jesse Skinner

Sapper is dead! What's next in Svelte?

In case you missed it, Rich Harris gave a presentation at Svelte Summit 2020, where he announced that Sapper v1 will never be released! Instead, he showed what's coming next in Svelte itself.

Be aware that at the time of me writing this blog post, none of this is officially released yet, and very likely will change in the near future. Nonetheless, it's exciting to see a sneak preview of what the future of Svelte will look like.

Getting started

To get started today, you can run this command in your terminal, assuming you have npm installed: npm init svelte@next

In the future, it'll probably just be npm init svelte, which is super clean and easy to remember. This will be a nice change from having to run npx degit svelte/template my-template.

Here's what you'll see if you run this command today:

█████████  ███████████    ███████    ███████████  ███
███░░░░░███░█░░░███░░░█  ███░░░░░███ ░░███░░░░░███░███
░███    ░░░ ░   ░███  ░  ███     ░░███ ░███    ░███░███
░░█████████     ░███    ░███      ░███ ░██████████ ░███
░░░░░░░░███    ░███    ░███      ░███ ░███░░░░░░  ░███
███    ░███    ░███    ░░███     ███  ░███        ░░░
░░█████████     █████    ░░░███████░   █████        ███
░░░░░░░░░     ░░░░░       ░░░░░░░    ░░░░░        ░░░

Pump the brakes! A little disclaimer...

svelte@next is not ready for use yet. It definitely can't
run your apps, and it might not run at all.

We haven't yet started accepting community contributions,
and we don't need people to start raising issues yet.

Given these warnings, please feel free to experiment, but
you're on your own for now. We'll have something to show
soon.

It'll go on to ask you if you want to use TypeScript, which is really nice for those who like TypeScript, and just as nice that it's optional for those who don't.

Here's the full directory structure you will get with an initial installation:

├── .gitignore
├── package.json
├── README.md
├── snowpack.config.js
├── src
│   ├── app.html
│   ├── components
│   │   └── Counter.svelte
│   └── routes
│       └── index.svelte
├── static
│   ├── favicon.ico
│   └── robots.txt
└── svelte.config.js

Starting the dev server

Once it's done setting up files, you need to run npm install and then npm run dev to spin up the dev server. Here's what you'll see:

snowpack

  http://localhost:3001 • http://10.0.0.180:3001
  Server started in 643ms.

▼ Console

[snowpack] installing dependencies...
[snowpack] ✔ install complete! [0.59s]
[snowpack] 
  ⦿ web_modules/                                size       gzip       brotli   
    ├─ svelte-hmr/runtime/hot-api-esm.js        22.08 KB   7.4 KB     6.29 KB    
    ├─ svelte-hmr/runtime/proxy-adapter-dom.js  5.17 KB    1.65 KB    1.38 KB    
    ├─ svelte.js                                0.18 KB    0.15 KB    0.11 KB    
    ├─ svelte/internal.js                       52.36 KB   13.16 KB   11.36 KB   
    └─ svelte/store.js                          3.3 KB     1 KB       0.88 KB    


[snowpack] > Listening on http://localhost:3000

What is happening under the hood? This is very different from the Svelte and Sapper templates that came before. There is no longer a rollup.config.js or a webpack.config.js, because it uses neither Rollup nor Webpack, at least not during development.

Instead, it uses Snowpack to handle compiling and serving client-side resources. Snowpack does not bundle your resources, and relies heavily on JavaScript’s native module system, which means development is much faster. There is even a snowpack.config.js file which gives you a place to configure Snowpack to some degree:

// Consult https://www.snowpack.dev to learn about these options
module.exports = {
    extends: '@sveltejs/snowpack-config'
};

Building your application

There is now also a new svelte.config.js file, which lets you define an "adapter", used with npm run build to build your application into a production web site:

module.exports = {
    // By default, `npm run build` will create a standard Node app.
    // You can create optimized builds for different platforms by
    // specifying a different adapter
    adapter: '@sveltejs/adapter-node'
};

The default adapter will use Rollup to build your site into a Node.js web server. It seems that this web server doesn't use Express.js, though that might change as well, or maybe there will be a special adapter for Express.

If you want to have a purely static export, you can currently replace @sveltejs/adapter-node with @sveltejs/adapter-static, but be sure to run npm install @sveltejs/adapter-static as well.
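
Your svelte.config.js would then look something like this:

module.exports = {
    adapter: '@sveltejs/adapter-static'
};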

In the future, there will be many other adapters, for example, building specifically for certain web hosting platforms, serverless architectures, and who knows what else? The cool thing about this adapter approach is that you can build your web site without necessarily knowing how it will be built or deployed. You'll be able to change the adapter without changing your code.

Dependencies

Let's have a look at the package.json:

{
    "name": "demo",
    "version": "0.0.1",
    "scripts": {
        "dev": "svelte dev",
        "build": "svelte build"
    },
    "devDependencies": {
        "@sveltejs/adapter-node": "0.0.12",
        "@sveltejs/kit": "0.0.23",
        "@sveltejs/snowpack-config": "0.0.4",
        "svelte": "^3.29.0"
    }
}

Note that there are very few dependencies here. I really like how minimal this is. Both of the scripts are using the new svelte CLI from @sveltejs/kit, though that name might change, and it's not even available on GitHub yet. For now, you can look at the npm package.

Routes

You'll notice a folder src/routes/ where you can define your routes, similar to how Sapper (or Next.js, etc.) lets you define routes. Basically, your folder and file structure in here maps one-to-one to the routes on your web site; for example, src/routes/about.svelte would become the /about page. This is really nice and easy to work with, especially if you're used to PHP or other similar web development platforms.

If you're not building a static-only web site, you can also define server-side routes, similar to what you can do with Sapper. For example, you can create a file at src/routes/api.js:

export async function get(req) {
    return {
        status: 200,
        body: {
            hello: 'world'
        }
    }
}

If you're familiar with Sapper, you might notice that you have to return an object with status and body properties, instead of using an Express res object for your response. This is because it is not Express middleware. It uses an internal Node web server, with an API similar to what you may have used with some serverless cloud functions.

To create a layout component, to provide a consistent header and footer wrapped around all your routes, you can create a file called $layout.svelte, similar to Sapper's _layout.svelte.
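
Here's a minimal sketch of what that could look like, assuming $layout.svelte renders the current route into a <slot> the same way Sapper's _layout.svelte did:

<!-- src/routes/$layout.svelte -->
<header>My site</header>

<slot></slot>

<footer>© My site</footer>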

You can also make an error handler route called $error.svelte, to handle 404s and other programming errors. It receives a status prop and also an error prop, so you can decide how to display the error to your users.
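
So a bare-bones $error.svelte might look something like this sketch, based on those two props:

<script>
    export let status;
    export let error;
</script>

<h1>{status}</h1>
<p>{error.message}</p>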

Migrating

Rich Harris notes that migrating from Sapper or other similar frameworks should be fairly straightforward, since most of the folder structure and other concepts are pretty similar. You'll probably just need to rename some files, and change how your server-side routes work, because they will no longer be written as Express middleware.

For fetching data for both server-side and client-side rendering, the Sapper approach of having a <script context="module"> block currently still works, though it's possible that will change.

Conclusion

If you're excited about all this stuff, it's definitely too early to start building your applications with it. But I'm willing to bet that getting started with Sapper today is a good choice, with the expectation that it'll be easy enough to migrate to this in the future, once it's ready.

To see a demo, check out Rich Harris' video, Futuristic Web Development.

If you're interested in learning more about Svelte, check out my video course The Joy of Svelte.

Published on October 28th, 2020. © Jesse Skinner