Coding with Jesse

What I learned from wearing a Continuous Glucose Monitor for two weeks

FreeStyle Libre 2 sensor in an arm

A couple of weeks ago, I bought a Continuous Glucose Monitor (CGM). My nutritionist suggested I try one, even though I'm not diabetic or pre-diabetic, just to learn something about myself and how food affects me. I'm always excited about new technology and gadgets, so of course I went out and bought one right away.

I bought the FreeStyle Libre 2 monitor, the only brand available in Canada (as far as I know). It cost $119 CAD, and I was able to buy it from the pharmacy without a prescription.

The monitor itself is a small, thin disc with a tiny needle in the middle. You install it on the back of your arm with an applicator. I was nervous, but it didn't hurt at all. It has a strong adhesive, like a bandage, so there's very little chance of it falling off accidentally. It stayed put for the full 14 days, though I felt I had to be a bit careful with it.

Once installed, you activate it with the companion app on your smartphone. The monitor syncs to your phone over NFC, so you can tap your phone against it as often as you want to get your current blood sugar. You'll also see a line chart of what happened through the day, on a minute-by-minute basis.

After having kept a close eye on it for the last two weeks, I now have a way better understanding of what my body feels like when the glucose changes, and how to keep it level. Here are a few things I learned from it.

Note: I'm fairly healthy and don't have diabetes or pre-diabetes, and these are just hunches, so they may not be true or apply to you. Factors like sleep, exercise, hormones, and your overall health can have a big influence on your blood sugar. The only way to learn about your own body is to try this yourself. Still, I'm hopeful some of this will be at least interesting to some of you.

  1. Eating a sugary or high-carb meal (pizza, oatmeal, half a bag of Doritos) makes my glucose spike up high quickly and then fall sharply, dropping lower than my average. My understanding is that this is because insulin was released, which allows the glucose to move from the blood into the cells, and excess glucose is converted into fat.

  2. Right after a high spike, I often drop too low and feel like snacking shortly after eating, even though my body just had an excess of energy. So I'm probably storing that glucose as fat, and then eating more than I need on top of it. The takeaway here is that I'm going to try to limit carbs to half my meal, and add in a salad or some protein to balance things out.

  3. Foods with a balance of carbs, fat, protein and fibre make my glucose go up more slowly, peak lower, stay up longer, and come down more slowly, ending up closer to my average, so I'm not hungry afterwards. By limiting the amount of refined carbs or sugar in a meal, there's a better chance I'll feel fuller for longer, and be able to go longer without snacking afterwards.

  4. Whenever I'm feeling hungry, I've probably dropped below 5.0 mmol/L. It's interesting how clearly blood sugar coincides with the desire to eat.

  5. Whenever I'm feeling starving or woozy, I've probably dropped below 4.0 mmol/L. This happened a few times when I exercised about an hour after eating a high-carb meal. While I'm crashing from the spike, my insulin and physical activity are both drawing down glucose quickly at the same time, and it ends up going too low. I now avoid eating a high-carb or sugary meal (e.g. oatmeal) before exercising.

  6. I never felt any of my sugar spikes. The highest I got was 11 mmol/L after eating pizza. Usually my spikes are around 9 mmol/L. Hyperglycemia starts around 10 mmol/L; it can damage cells and increase insulin resistance, and over time can contribute to long-term health problems such as cardiovascular disease and type 2 diabetes.

  7. Meals with fewer carbs might barely move my glucose at all. Even a single slice of bread or pizza, or a handful of chips, seems to have very little impact. This shows how important portion control is. By avoiding carbs altogether, I was able to stay basically flat all day, though I don't know how necessary or healthy that is.

  8. It's really interesting how a lot of advice I hear all the time now makes more sense in a tangible way. Eat smaller meals (shorter spike). Try intermittent fasting (more time between spikes). Avoid sugar, especially sugary drinks (very sharp spike). Eat light before exercising (so you're not crashing after a spike and dip into danger zone). Etc.

  9. I now suspect that many of the times over the years when I've felt really off without explanation, it was because I was crashing below 4.0 mmol/L after a spike.

  10. As a test one night, I ate a tremendous amount of pasta for dinner. Surprisingly, this didn't make me spike or crash sharply! What it seemed to do instead was keep me at a rather high baseline. Somehow, my glucose was at its highest hours later, right before bed (8.3 mmol/L). When I woke up the next morning it was still quite high (6.6 mmol/L, in the pre-diabetic range for a fasting glucose)! This was just a one-off, so there could have been other factors at play (exercise, stress, etc.), but I feel like it was probably mostly the pasta.

Overall, I was really glad I tried this out. I think it's absolutely worth the cost, because it has given me tangible first-hand experience with a lot of things I already knew in the abstract. I see it as a one-time educational thing that'll ideally pay benefits over the next decades. I believe it will help me make better choices and hopefully avoid problems with my health over the long run.

I might try it again down the road to see how I've changed. Or maybe I'll get one of those finger prick glucose monitors to spot check when I'm feeling strange.

The biggest change I've made after all this: I don't put sugar in my coffee anymore. This was adding two sharp glucose spikes early in my day every day, so cutting those out was a quick win. Now, I tend to stay relatively flat through the day with a moderate, slower increase after dinner.


Here are two line graphs, one from my best day and one from my worst day (the first spike was from oatmeal, the dinner spike was from pizza, and apparently it even broke the monitor!):

My blood sugar charts from two days

For comparison, here are two sample days sent to me from RevK who is a diabetic:

RevK's blood sugar charts from two days
Published on April 2nd, 2023. © Jesse Skinner

Deploying a static site to Cloudflare Pages

A stack of paper in a field blowing away in the wind

I moved codingwithjesse.com to Cloudflare Pages this week!

I was having some intermittent outages on my website in the hours before I wanted to publish my last blog post. Since I wanted to publish immediately, but didn't want to send people to a broken site, I decided I'd finally try out hosting my website on Cloudflare Pages. It went so smoothly, I was able to get it working in under an hour and publish the blog post!

First of all, you should know that Cloudflare Pages works best with a static website. It's very possible to run websites with server-side code using Cloudflare Workers, but my website is static, so I didn't need to worry about that. (I plan to move some other sites to Cloudflare Pages and Workers later, so I'll probably do a write-up of that when I do!)

At the time of writing this, Cloudflare Pages is free, with unlimited sites, unlimited requests and unlimited bandwidth, and a limit of 500 builds per month. But you should double-check the pricing page, as it may change in the future.

Setting up Cloudflare Pages in an emergency

I didn't have time to figure out the best way to use Cloudflare Pages, and wasn't totally sure I'd want to stick with it, so I did the easiest possible thing I could find.

All I wanted to do was somehow upload a zip file of my website and have it be hosted on there. I didn't know if that was possible, but I was eager to figure it out. Here's how I pulled it off in record time.

  1. I logged on to the Cloudflare dashboard and clicked Pages in the side nav. I clicked the "Create a project" button, and chose the "Direct upload" option. Perfect!

  2. Cloudflare asked me to create a name for my project. I chose "codingwithjesse" and clicked "Create Project".

  3. I clicked "select from computer" and chose "Upload zip" to browse to my zip file and upload it. Easy!

  4. After a while (I have 600+ pages on my site, and it took a few minutes), it was ready, and I could click "Deploy site". Success!

  5. I was able to see my new site at codingwithjesse.pages.dev and verify that everything looked good. I did have to wait a few minutes for the DNS to propagate and the subdomain to show up, but when it did, it looked perfect.

  6. Returning to the newly created project, I had to click on the "Custom domains" tab and the "Set up a custom domain" button, so that I could map www.codingwithjesse.com to this new subdomain.

  7. Since I already had my domain on Cloudflare, all I had to do was confirm the new DNS records and it was ready to go!

If you're new to Cloudflare, there will obviously be other things you have to do to get set up here. But it's also possible to use Cloudflare Pages without using Cloudflare DNS - you'll just have to manually set up the CNAME records in your DNS provider. Don't worry, Cloudflare walks you through that process.

Deploying to Cloudflare Pages the easier way

The zip file approach worked great as a first test, and I actually used the same zip upload method a dozen more times as I made small edits to the site. But that got tiring, so I wanted to figure out how to deploy my changes automatically and programmatically from my command line. Turned out this approach was just as easy as using the dashboard.

Cloudflare's command line tool is called Wrangler. This tool is how you can easily interact with Cloudflare and deploy to Cloudflare Pages.

To get it working, I needed to have two things in environment variables: an API key, and my Account ID.

I went and set up an API key that only has access to Pages on my Cloudflare account. I went to the API Tokens section of the Cloudflare dashboard, and created a new token. I added only one permission to the token: Account > Cloudflare Pages > Edit.

I also copied the account ID from my dashboard to use in the environment variable.

I had to run CLOUDFLARE_ACCOUNT_ID=theaccountid CLOUDFLARE_API_TOKEN=thisisthetoken npx wrangler pages publish ./build, telling it to upload all the files in my build directory. It asked me if I wanted to create a new project or use an existing project. I chose "Use an existing project", and was able to see my "codingwithjesse" project right there to select it.

It uploaded the files, and... success! It gave me a temporary deployment subdomain where I could verify that the changes I wanted were correct. Uploading this way was much faster, as it only had to upload the files that had changed.

This actually didn't update my production site. To push directly to production, and to skip the question about which project to use, I had to run CLOUDFLARE_ACCOUNT_ID=theaccountid CLOUDFLARE_API_TOKEN=thisisthetoken npx wrangler pages publish ./build --project-name codingwithjesse --branch main

Making a private bash script

You shouldn't put your API key in your git repo, so make sure you don't put it into your package.json or commit it anywhere by accident.

To avoid this, I usually create a simple bash script, push.sh, in the root of a lot of my projects. I add push.sh to my .gitignore so it won't be committed by accident. Its contents look like this:

#!/bin/bash

npm run build

CLOUDFLARE_ACCOUNT_ID=theaccountid CLOUDFLARE_API_TOKEN=thisisthetoken npx wrangler pages publish ./build --project-name codingwithjesse --branch main

You'll have to run chmod +x ./push.sh to allow it to execute. After that, you can build and push the site just by running ./push.sh.

There are other ways to manage your environment variables and secrets, but this is the approach that works well for me for a lot of projects.

Lots of possibilities

Cloudflare Pages can integrate with your GitHub repo and other deployment pipelines, so that whenever you push your changes, it'll build and deploy automatically. This doesn't work for me for this blog, because the content is in a database and doesn't live in my git repo, but it might be a good option for your project.

If you're interested in learning more, check out the Cloudflare Pages documentation. There are examples for pretty much every framework out there, so you should have no problem figuring out the best way to deploy your static site.

Published on March 4th, 2023. © Jesse Skinner

Debugging a slow web app

A watercolour illustration of a robot slowly wading through a swamp

I got an email today from one of my clients, letting me know that one of his web apps was down. He said he was getting an error and asked me to take a look.

In the end, I was able to fix it and get it running faster than ever. What caused it turned out to be a huge surprise to me. I thought I'd outline the steps I went through here, to try to help others trying to solve similar problems in the future.

See for yourself

The obvious first step was to go see for myself. I loaded up the website, wondering what kind of error I would find. The site initially loaded fine, but then a spinner appeared and seemed to get stuck. After a long while, the main content of the site failed to load, and an error appeared about failing to parse JSON.

I opened up dev tools and refreshed, keeping an eye on the network tab. Most things were loading fine, including static assets. But I noticed some fetches to the website's API were taking a long time. They eventually failed with a 504 Gateway Timeout error.

The website is behind a load balancer, and I know load balancers generally have a timeout limit of around one minute. So all I knew was that these API calls were taking longer than that. I could only assume they would eventually succeed and were simply slow, but I wasn't totally sure.

Try to reproduce

Fortunately, I already had a dev environment for the site set up locally. I didn't write the whole application, but I had recently made some performance improvements to parts of the site. I wasn't very familiar with the API side of things though.

Sure enough, it started up fine, and the data all loaded correctly. So I figured it probably wasn't an issue with the code itself.

I started to wonder what could have happened to break the site all of a sudden, when it had worked fine in the past. Did the server auto-update some dependency that was breaking something? Was the server out of disk space? Was the database out of memory?

Getting close to the metal

My next step was to actually ssh into one of the servers to see what was going on there. Everything seemed okay. I ran free -m to check on the memory, but the RAM usage was fine. I ran df -h to check on disk usage, but none of the disks were full. Running top, the CPU usage looked fine as well. I was a bit stumped.

I turned to look at the database. This site is running on AWS, so I logged on to the RDS admin in the console and checked the graphs in the monitoring tab. Everything seemed fine there too. CPU wasn't too high, there were barely any connections, and the database wasn't out of space.

Still, I knew these API requests were hanging on something. I went back to the code and looked at the API endpoints in question, and all they did was make a database query. At this point I was pretty sure it was database-related somehow.

Going into the database

I decided to log in to the production database using the mysql command-line tool. The database is not accessible to the public, so only the production web server has access. I'd never gone in there before, so I looked at the config file for the server-side application to find the credentials (hostname, username, password and database name).

This is a MySQL database (MariaDB, actually), so once I got in, the first thing I ran was SHOW PROCESSLIST to see if anything stood out. There were a ton of queries in there, many of which had been running for more than a minute. One had been sitting there for almost an hour!

Optimizing queries

Finally, I found the problem. All the slow queries were working with a single table. There was a mix of SELECT and UPDATE statements, so I figured the table was probably missing indices, or something else was making the queries run slowly.

I ran SHOW CREATE TABLE xyz to see the structure of the table. I was wrong: there were lots of keys on it. Knowing that MySQL will generally only use one key per query, my next guess was that there were actually too many keys on the table, and that it would benefit from having fewer keys with multiple columns, targeted at these particular queries.

I tested my theory by hand-writing a simple query to see how slow it would be. It was slow, but it only took about one second, not a minute. So that wasn't it.

I wondered if I was missing something. Calling SHOW PROCESSLIST shows a summary of queries, but it cuts them off. A quick DuckDuckGo search later, and I found out you can call SHOW FULL PROCESSLIST to see the entire query.

It was then that I discovered the problem. The query was written exclusively using LIKE comparisons instead of =, e.g.:

SELECT *
FROM xyz
WHERE thing_id LIKE '12345678'
AND status LIKE 'ok'

Even the update statements used LIKE:

UPDATE xyz
SET status = 'ok'
WHERE id LIKE '12345678'

I found it unconventional to say the least. But was that really causing the problem?

I changed my hand-written query to use LIKE instead of =, and sure enough, I had to Ctrl+C to abort it after a long time of waiting.

I realised that yes, of course, this would slow things down. MySQL must be scanning the entire table, converting the IDs from numbers to strings, and doing pattern matching on each one. No wonder it was running so slowly!

Closer to a solution

I searched the code base for "LIKE" and found the cause. Buried in a custom query builder a past developer had assembled, every query parameter was compared using either LIKE or IN, including in UPDATE statements.

I'm not totally sure what the developer was thinking here. Were they making it so you could search on any field easily? I'm not sure, because I wasn't able to find an example where fields were actually searched on anywhere. We may never know.

The problem was in the code base

I was surprised the problem actually was in the code base itself. But it made sense: this isn't the kind of problem that would have shown up locally, or even when the site first launched. It would have grown slowly as the table grew, and apparently only became a major issue once the table had over 750k records in it.

The solution seemed straightforward to me. I modified the API endpoints that used this table, and rewrote the queries directly instead of using this query builder code. (Side note: I've never liked query builders, and this is an excellent example of why!)
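
To illustrate the difference (this is a hypothetical reconstruction for the sake of example, not the actual code from the project), the fix boils down to swapping the comparison operator the builder generated:

```javascript
// A simplified sketch of a query builder that compares every
// parameter with LIKE, defeating the index on a numeric ID column
// and forcing a full table scan with string conversions.
function buildQueryWithLike(table, params) {
  const where = Object.keys(params)
    .map((key) => `${key} LIKE ?`)
    .join(' AND ');
  return `SELECT * FROM ${table} WHERE ${where}`;
}

// The rewritten version: plain equality, so MySQL can use the index.
function buildQueryWithEquals(table, params) {
  const where = Object.keys(params)
    .map((key) => `${key} = ?`)
    .join(' AND ');
  return `SELECT * FROM ${table} WHERE ${where}`;
}

console.log(buildQueryWithLike('xyz', { thing_id: 1, status: 'ok' }));
// SELECT * FROM xyz WHERE thing_id LIKE ? AND status LIKE ?
console.log(buildQueryWithEquals('xyz', { thing_id: 1, status: 'ok' }));
// SELECT * FROM xyz WHERE thing_id = ? AND status = ?
```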

I would have liked to modify the query builder to replace LIKE with =, but because I'm not sure if that functionality was needed elsewhere, I thought it best to leave it alone, and migrate away from the query builder instead.

Ship it!

The last step was to commit and push the code, and roll out an updated version of the system. Shortly after the new version went live, I logged in to the database again and ran SHOW PROCESSLIST. Nothing but a bunch of idle connections! Perfect!

I went over to the AWS admin panel, and sure enough, the "Read IOPS/second" chart had dropped from a steady 100 down to 0! That was a nice reassurance that things were massively improved.

The site wasn't just working again, it was faster than ever!

Lessons learned

There are always lessons to be learned from any outage. Here are a few I learned today:

  1. You should definitely not use LIKE in your database queries to match numeric, indexed IDs. I've never done this, but now the fact has been ingrained deep in my brain.

  2. You probably shouldn't write your own query builders. Again, I've never liked query builders so this just reaffirmed my belief.

  3. You should maybe think about testing your web apps with a very large amount of dummy data in the database sometimes. I've never done this, as it's slow and seems a bit excessive, but I think I may start trying this in the future, particularly on systems where I expect the tables to grow enormously over time.

  4. SHOW FULL PROCESSLIST is a thing. Okay, that's not so much a lesson as it is something I learned that wasn't already in my tool belt.

All in all, the whole process took about an hour, and I'm glad I was able to get things back up and running quickly. Beyond the lessons learned, I got to know the system a little better, and have a lot of new ideas of ways I can improve and speed things up in the future, so it was definitely a worthwhile experience!

Published on February 23rd, 2023. © Jesse Skinner

Web apps that last

A castle made of sand

When you're building a new web application, or even a new feature, how can you ensure that you're not creating a nightmare code base that will need to be rewritten completely in a few years?

Some people will say it's hopeless to even try to write code that will last. I've even heard people suggest that you should aim to rewrite all your code every few years. That sounds like a very expensive, wasteful strategy.

In two decades of building web apps, I've seen many codebases start as a shiny new prototype and grow into a huge system. In some cases, they've become old, ugly, painful legacy systems that teams are begging to rewrite and replace. (And often those rewrites will themselves grow into ugly, painful legacy systems!) But sometimes a codebase will remain more or less unchanged a decade later, running smoothly as ever.

I believe there are some decisions you can make when writing code that will help it to last longer, and withstand the test of time.

Change is inevitable

Probably the one thing you can be sure of is that change will come. The goals of a business will change, and the people within a business will change. There will inevitably be features added, and existing features will evolve and be repurposed. The names of the products will almost surely change. So is it even possible to write code that doesn't need to change?

I think the key is in the phrase "If it ain't broke, don't fix it". Code that is fulfilling its task, that is doing what it's supposed to do, and is bug-free, is code that will last a long time.

Nightmare code

To understand how to write code that will last, let's think about the opposite: a nightmare codebase that demands to be rewritten. The worst I've seen is a web server written as a giant single file with thousands of lines of code. A system built like a house of cards, where changing one thing will break everything. Code that is very difficult to read or understand. Code that literally gives developers nightmares.

Unfortunately, this is often the kind of code that comes out of throwing together a quick prototype. A hero developer stays up late one night and churns out a first draft like a stream-of-consciousness. The next morning, the business owner is delighted to see their dreams come to life. Everyone's happy.

Then, they ask to change just one thing. Add this little feature. And this other feature. And now this user needs this other thing. And could you just change that other thing quick?

Months later, and this rough draft has accidentally become the foundation for a web application that continues to grow, held together with digital duct tape.

So how do you prevent this nightmare from unfolding?

Do one thing, and do it well

Modularity is extremely important in writing code that will last. And a good module is a piece of code that does one thing, and does it well. That one thing might be interfacing with a single database table. Or it could be handling HTTP calls on a single URL and passing the data to and from other modules that talk to the database.

I find generally that it works best when each module has zero, one or two major dependencies. With zero dependencies, you have a set of functions that receive input data, process it in some way, and return results. With one dependency, you have a set of functions that act as an abstraction or interface to that dependency. With two dependencies, you're writing code that bridges the gap between the two, acting as an adapter or controller.

More than two major dependencies, and you should ask yourself if there's any way to split things up into smaller pieces that are responsible for fewer things.

A dependency might not always be a program you have to install. Another module in your system is also a dependency. So is your business logic. I think about dependencies as anything that your module "knows about". This could even be the shape of certain data structures that might not have explicit type definitions.

The fewer things a module knows about, the more likely the module will be able to persist unchanged over time, because there will be fewer reasons to change it.

When your web application is built with small, independent modules that only do one thing, the chances are much, much lower that any of those pieces will need to be rewritten. And the chance of the whole application needing to be rewritten all at once drops to nearly zero. Even if you later want to do a major redesign, you'll find it easier to copy over lots of these older, simple modules to reuse in the new system.

Finally, a tangible example

Let's say you need to send out a Forgot Password email. You could do the whole thing in one file, but I would prefer to split it up like this:

  1. A module that knows how to actually send an email using AWS SES or something, but doesn't know the recipient, subject or body of the email. function sendEmail(toAddress, subject, body), for example.

  2. A module that knows about the subject and body of the Forgot Password email, but doesn't know who it's sending to or what the reset URL will be. function sendForgotPasswordEmail(toAddress, resetUrl)

  3. A module for the user table in the database, that has a function to generate a reset code, but doesn't know how the reset code will be used or even whether an email will be sent out. function createResetCode(userEmail)

  4. A module that knows about the URL structure of the site, and has a function that can generate a password reset link from a reset code. function getResetUrlFromCode(code)

  5. A module that ties everything together. It takes an email address, calls createResetCode, uses that code to call getResetUrlFromCode, then passes the recipient address and reset URL to sendForgotPasswordEmail. forgotPassword(email)

  6. A user interface widget with a form, a text field and a button, so the user can type in their email address and click Send password reset link. When the form is submitted, it tells the user to go wait for the email.

  7. A module that is responsible for the server-side password reset part of the system. It receives the form submission, pulls the email address from the form data, calls forgotPassword, and then sends a success status back to the browser.

Here, only a few modules are likely to change. You'll probably see changes to the sendForgotPasswordEmail function, as well as the user interface widget. All the other modules I've outlined are very reusable, and highly unlikely to change, unless you change your email sending provider, or your database software, or something else major. Even in those situations, the code that needs to change is very isolated and easy to replace without affecting anything else.

You can even improve on this further, by having the contents of the email be database-driven, so that non-technical staff members can change the email templates themselves through an admin interface. But an architecture like this is a good starting point that makes those sorts of changes simpler to make.

A good start

If you get in the habit of writing more modular code, and splitting things up as early as possible, then the next time you're throwing together a quick prototype, you'll be able to lean on those principles in the process.

Instead of a giant ball of tangled dependencies and logic, you'll be building smaller, simpler, reusable components that can be used as solid building blocks. Some of these will be so useful and generic that you'll even be able to reuse them in completely different systems without changing them at all.

Published on February 19th, 2023. © Jesse Skinner

Trying to decide what to do next? Follow the light.

A tree growing towards the light

Happy New Year! I've been trying to come up with a New Year's resolution, and it got me thinking about setting goals, finding and following your purpose, and how this ties into some books I read this year.

TL;DR: If you're trying to decide what to do next in your life, in which direction you should expand and grow, maybe it helps to think like a tree and go where the sunshine is.

The purpose of life

The most interesting book I read in 2022 was The Romance of Reality, where Bobby Azarian does an amazing job applying Darwinism to the universe.

He says that the universe itself is a self-organizing system, with a bias towards increasing order, complexity and awareness. The idea is that the process of evolution was around before life emerged. Life formed at the bottom of the ocean at thermal vents, where tremendous amounts of extreme heat energy met extreme coldness. Eventually, the first forms of life emerged here to capture that wasted energy and put it to use.

Fast forward to the present, and now we have complex life everywhere we look, actively consuming any and all food and energy available and using it to maintain the structures of our bodies, our systems and our species.

As lifeforms, humans are sentient agents of the universe whose purpose is to use our awareness and intelligence in order to optimise the conversion of available energy into complexity and order.

It's not hard to see how true this is. So much of what we do boils down to consuming energy (food, fuel, heat) so that we can create more order (clean homes, growing families, bigger cities, information, content creation). Pretty much every job is related either directly or tangentially to this process, or optimizing the process.

We've even dug towards the centre of the earth and the centre of the atom in order to unlock and consume more and more available energy and use it to create increasingly complex systems and structures.

So how do you fit into all this? And how can you use this perspective and knowledge to live a good life?

Following your dreams

I just finished reading The Alchemist by Paulo Coelho. It's a story about following your dreams. It's about a shepherd boy in Spain who dreams about finding treasure at the pyramids in Egypt. Following the guidance of those he meets along the way, he goes on a quest to literally follow his dream and see where it leads.

I've always liked stories about following your dreams. I've always tried to follow my own dreams. Once upon a time, I was stuck in the proverbial office job, and dreamed of the day I could be working from home, setting my own hours, choosing work I found interesting. I dreamed about buying a house, getting married and having children.

I followed those dreams, and soon started freelancing. Several years later, I bought a house, got married, and now have the family and life I'd always dreamed about.

So now I'm looking to the future, wondering where to go from here. There are so many possibilities that it's hard to focus and hard to decide.

Thinking like a tree

We're all in search of our potential. It's not just about finding happiness, but also finding activities that won't burn us out, that are sustainable in every sense of the word.

A tree will put more energy into the branches that get more sunshine, because opportunity creates a void that must be filled. Nature abhors a vacuum.

Thinking about our careers, interests and opportunities, it's as if we are trees with branches growing out in many directions, and we are trying to decide whether we should grow in this direction or that.

Like a tree, we can feel the energy coming from each branch to decide whether it's getting more sunshine or less. Sometimes this process is described as market research, trying to establish whether there is demand for a particular enterprise. Is it time to hire an employee? Or write a book? Or start a non-profit? Or teach? Or make a video game? Or take time off and travel? Or just hunker down and work harder doing the same things as ever?

Energy can take the form of light, or heat, or money, but also inspiration, joy, excitement and motivation. Which activities will give you a turbo boost and allow you to grow and expand further? Which are a dead end? You can also see which of your branches are already giving you more energy back. And you can expand carefully, incrementally, to get more feedback, to see if these directions are the right directions for you.

They need not expand forever in any given direction. Maybe there's a ton of energy available to move in the direction of, for example, publishing one small video game. But maybe that's also where it stops, and going all-in on video game development would be a terrible mistake. Or, maybe it opens up a new opportunity, one nobody could see or feel from here?

Follow the light

Follow your passions? What does that even mean? Instead, follow the light.

What is shining brightest to you right now? Where are your branches expanding to, and which of those branches are shouting "Go this way!!" Optimize for excitement. Learn how to convert some of that sunshine into food, by bringing joy to others (aka "providing value") such that others will be happy to give you sunshine tokens (aka "money") for the joy you bring.

If something excites you, it'll likely excite others. Because you do not live alone in a desert. If you can capture a bit of that sunshine out of the air, you can make it available to the whole world. And when you do, more energy will flow back from the world to you, as if to say "Yes, keep going!"

Follow the light. Capture excitement out of the air and share it with the world.

Published on January 1st, 2023. © Jesse Skinner