Coding with Jesse

Customizing an LLM

[Image: A robot reading instructions]

I've been coding with GitHub Copilot and ChatGPT for a couple of years now. Last week, I started using Cursor and am really enjoying it. I'm particularly enjoying the switch to Claude 3.5 Sonnet. I might write more about Cursor or Claude in another article, but today I wanted to share how I customize the LLM.

LLMs are annoying by default

I find LLM output to be somewhat annoying and flawed. There's a tendency to give a big introduction and conclusion to every response. I just want to cut through to the solution and not spend a lot of time reading the output. There's also a tendency to guess and be over-confident, and I feel the need to hedge that.

Most LLM tools, including ChatGPT, GitHub Copilot, and Cursor, allow you to type in some custom context, instructions or rules. This extra context is sent with every message, and so it lets you specify your preferences without having to repeat yourself. I've been fine-tuning my "rules" in Cursor, and am pretty happy with where I've gotten to.

The context

Here's the full context, in case you want to copy & paste it. After, I'll go line by line and explain why I added each part.

You are a wise, genius skilled developer with a lifetime of experience. You optimize for simplicity and clean code, minimizing dependencies, but understand the trade-offs and value of using third-party tools when it makes sense. Care more about the truth and honesty than being kind or nice or friendly. Be blunt, accurate, succinct and direct. Do not kiss my ass, criticize me if you see any problems. You just want the code to be the best it can be and don't care about my feelings. Skip sentences that don't add new information or value. Skip to the answer and don't waste time. Don't explain things to me unless I ask you to. Put key phrases in bold. Avoid lists. Include inline hyperlinks to documentation whenever possible. Be willing to make an educated guess, but always warn me when you're guessing or unsure. Ask for more information or clarification if needed. Always admit your weaknesses so I know when to turn elsewhere. Be casual, don't care about grammar. Speak like a young genius hacker.

You probably have different preferences than I do, so I encourage you to experiment and customize these to your liking.

The explanation

You are a wise, genius skilled developer with a lifetime of experience.

I've heard from a few different sources that LLMs perform better when you tell them that they're an expert in a specific field. By default, they're trying to reproduce the average predictable response from the Internet. If you want an expert response from it, tell it that it's an expert. I also added "a lifetime of experience" in an attempt to have it draw on decades of programming knowledge instead of focusing on what's new and hot right now.

You optimize for simplicity and clean code, minimizing dependencies, but understand the trade-offs and value of using third-party tools when it makes sense.

Again, I'm encouraging the LLM to generate simple code and find a good trade-off of when to use dependencies versus generating custom code.

Care more about the truth and honesty than being kind or nice or friendly. Be blunt, accurate, succinct and direct. Do not kiss my ass, criticize me if you see any problems. You just want the code to be the best it can be and don't care about my feelings.

I find LLMs to be people-pleasers by default, and I don't want that. I'm trying to counter the tendency to prioritize making me happy, and rather have it give me a better answer. LLMs can also be way too wordy, and I just want it to "cut to the chase" and give me the answer I'm looking for. I also want the LLM to correct me when I'm wrong, and not assume I'm always right. I want it to criticize me when it sees a problem. I want to be challenged and not work with a "yes man".

Skip sentences that don't add new information or value. Skip to the answer and don't waste time. Don't explain things to me unless I ask you to.

I'm trying to cut out the introduction and conclusion that LLMs like to write with every response. They also like to summarize things, or explain to me what I just asked them to do, which is annoying. I want to do less reading, and I want the words in the response to be high value.

Put key phrases in bold. Avoid lists.

Here, I'm trying to make the output more readable, so I can skim through it. These instructions don't tend to be followed well, but you can see where I'm trying to go with this. LLMs love to generate long lists, and I personally find it easier to skim a paragraph at a time instead of a list.

Include inline hyperlinks to documentation whenever possible.

This is the most valuable context tip I've found. Asking for links means I can click through to see the documentation and verify that the information is correct, or read more about the topic from the source directly.

Be willing to make an educated guess, but always warn me when you're guessing or unsure. Ask for more information or clarification if needed. Always admit your weaknesses so I know when to turn elsewhere.

LLMs are notorious for hallucinating. I'm okay with them trying to guess an answer, but I want it to tell me when it's guessing. These additions really do seem to help minimize the hallucinating. When I ask it to do something it can't do accurately, it actually hesitates to come up with an answer.

Be casual, don't care about grammar. Speak like a young genius hacker.

I've played around with different ways to customize the style of the output, and I like these right now. I'm happy with some swearing, with lowercase responses, some rudeness. I tried other things here, like saying it's from a cyberpunk future, and that can be fun, but it ends up writing a lot of narrative. I found it too distracting. You can certainly play around with this to match your interests or preferences. Maybe you want it to speak like Gandalf or a pirate or something silly. There are limitless possibilities here.

Say a bit about yourself

I've left out a few other sentences that are a bit more personal. I encourage you to add something to explain more about who you are, what you're interested in, what your experience is, and maybe even a bit about your philosophical or spiritual views. Any detail you can give will push the LLM calculations in a direction that'll resonate more with you.

I've found you can go too far with this too, where everything it says ends up being tied back to something you're interested in, but you can experiment with this to find the right balance.

Ask it for help

I found it useful to actually ask the LLM for help in crafting these. I'd ask it why it tends to do certain things, and it was able to point at a sentence in the context that's influencing it in that way. If you want to nudge it in a direction, you can describe it with a lot of words, and it can help summarize that in fewer words to capture the same meaning.

Keep experimenting

I've had a lot of fun playing around with these sentences this week, and I'm really happy with where it ended up. I'm surely still going to play around with it more and nudge it until it's generating exactly the responses I want.

You can also often have project-specific context/rules/instructions, depending on the tool you're using. Maybe you want to have a grumpy senior developer when you need help with coding, but a hippie, loving author when you want advice with writing, and an exuberant, passionate chef when you want help coming up with recipes. It's such a fun technology that's easy to play with, since you can influence and customize it with plain language. It'll do whatever you ask it to, so play around and have fun with it!

Published on November 28th, 2024. © Jesse Skinner

Setting up a new computer

I love getting a new computer. I don't copy over all my files from my old computer anymore. Instead, I like to use it as a chance for a fresh start.

I have a vision, but so far it's been only a dream. My vision is that I could get access to any new computer, and within a few minutes be totally up and running with my full developer work environment, all my photos and videos, my documents, and everything else I have and need. The reality is nothing like this, of course. But I'm getting closer to it. Here's how I did it with my new laptop this past month.

Software

The first thing I need to do is set up my operating system (arch btw) and download all the software I need and use on a regular basis.

For me, this includes installing i3, fish, VS Code, git, rsync, rclone, mariadb, node, keepassxc, aws-cli, terminator, chromium, libreoffice, spotify, syncthing, workrave, and a few other things.

I could probably automate this and install everything that I had on my old computer, but I actually love the process of starting from scratch here and only installing the software I actually need and use. Arch Linux starts with a very minimalist environment, so I know that there's really nothing on this computer that I haven't explicitly installed.
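On Arch, that whole list can go into a single pacman call. A rough sketch — the exact package names here are from memory and a few of them (spotify, for example) actually live in the AUR rather than the official repos, so adjust to taste:

```shell
# install the basics in one shot; --needed skips anything already installed.
# package names are approximate and a few live in the AUR, so verify first.
sudo pacman -S --needed i3-wm fish code git rsync rclone mariadb nodejs \
    keepassxc aws-cli terminator chromium libreoffice-fresh syncthing workrave
```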

Starting from scratch also gives a chance to try out some new internal services. For example, I'm now trying out using iwctl to manage my wifi connections instead of wpa_supplicant.
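For anyone curious what the iwctl workflow looks like, it's roughly this — `wlan0` and `MySSID` are placeholders for your actual device and network:

```shell
# connect to wifi with iwd's iwctl instead of wpa_supplicant
iwctl device list                    # find your wireless device name
iwctl station wlan0 scan             # scan for networks
iwctl station wlan0 get-networks     # list what the scan found
iwctl station wlan0 connect MySSID   # prompts for the passphrase
```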

SSH keys

Once I have my software installed, the only things I have to copy over from my old computer on a USB stick are my SSH keys, i.e. the contents of ~/.ssh/.

These keys give me access to everything else. Once I have these keys, I'm already starting to feel at home.
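The copy itself is trivial, but the permissions matter: ssh will refuse to use keys that other users can read. Something like this, assuming the stick is mounted at /mnt/usb (that path is just an example):

```shell
# restore the keys from the usb stick and tighten the permissions,
# since ssh rejects private keys that are readable by others
cp -r /mnt/usb/.ssh ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_*
chmod 644 ~/.ssh/*.pub
```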

Git

The SSH keys give me access to servers. On one of these servers live my private Git repositories. I like to keep all these git clones in a /code directory on my computer. I clone all my active projects via ssh:

sudo mkdir /code
sudo chown jesse:jesse /code
cd /code
git clone ssh://[email protected]/~/git/codingwithjesse
git clone ssh://[email protected]/~/git/joyofsvelte
git clone ssh://[email protected]/~/git/dotfiles
# etc..

dotfiles

One of the Git repos I cloned is a private dotfiles repository that has all the configuration I care about. I make sure to push changes to this repo from my old computer one last time before cloning here.

I use symlinks in my home directory so that the files live in the repo. I have an install.sh in my dotfiles repo that sets it all up:

#!/bin/bash

# move any existing config out of the way into a timestamped backup directory
BACKUP=backup-$(date +%s)

mkdir "$BACKUP"
mv ~/.bashrc "$BACKUP"
mv ~/.bash_prompt "$BACKUP"
mv ~/.bash_profile "$BACKUP"
mv ~/.aws "$BACKUP"
mv ~/.gitconfig "$BACKUP"
mv ~/.config/i3 "$BACKUP"
mv ~/.config/i3status "$BACKUP"
mv ~/.config/fish "$BACKUP"
mv ~/.config/rclone "$BACKUP"
mv ~/.local/share/fish/fish_history "$BACKUP"

# symlink everything back to the copies living in this repo
DIR=$(pwd)

ln -s "$DIR/.bashrc" ~/.bashrc
ln -s "$DIR/.bash_profile" ~/.bash_profile
ln -s "$DIR/.bash_prompt" ~/.bash_prompt
ln -s "$DIR/.aws" ~/.aws
ln -s "$DIR/.gitconfig" ~/.gitconfig
ln -s "$DIR/.config/i3" ~/.config/i3
ln -s "$DIR/.config/i3status" ~/.config/i3status
ln -s "$DIR/.config/fish" ~/.config/fish
ln -s "$DIR/.config/rclone" ~/.config/rclone
ln -s "$DIR/.local/share/fish/fish_history" ~/.local/share/fish/fish_history

Of course, the set of dotfiles you care about will probably be different.

rsync

I also keep a backup of all my important documents (taxes, contracts, PDFs and spreadsheets) and my passwords (keepass database) on my server. I use rsync to backup these files, and I also use it to restore my backups:

rsync -avz [email protected]:~/docs ~/docs
rsync -avz [email protected]:~/passwords ~/passwords

Perhaps I could keep these in Git repos as well, for simplicity. It might be nice to have versioning on my tax documents and contracts, even though they don't change much.
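That git-instead-of-rsync idea could look something like this. It's a hypothetical sketch done in a temp directory with a throwaway file, so it's harmless to run as-is:

```shell
# hypothetical sketch: versioning documents with git instead of plain rsync,
# using a temp directory and a dummy file so this is safe to run anywhere
DOCS=$(mktemp -d)
cd "$DOCS"
git init -q
echo "2024 contract" > contract.txt
git add -A
git -c user.name=jesse -c user.email=jesse@example.com commit -q -m "snapshot"
git log --oneline
```

From there it's just a matter of adding a remote on the server and pushing, the same as any other repo.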

rclone

For larger files, like photos and videos, I use rclone to manage an encrypted backup in object storage. I really enjoy using rclone. I love how it provides a really easy command-line user interface, abstracting away a wide variety of cloud storage systems. I've switched between these services based on price a few times, and it was really easy to do.

rclone also has a useful ability to mount a backup to a directory. For the first time on this new computer, I have this set up in /mnt/media, with directories like /mnt/media/photos and /mnt/media/videos so I can easily browse and view all my content without copying anything to my computer.
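Day-to-day usage looks roughly like this — `media:` here stands in for whatever you named your remote when you ran `rclone config`:

```shell
# typical rclone usage against an encrypted remote named "media:"
# (the remote name comes from your own `rclone config` setup)
rclone sync ~/photos media:photos --progress   # back up new photos
rclone lsd media:                              # list top-level directories
rclone copy media:photos/2024 ~/tmp/2024       # pull a folder back down
```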

I have this set up as a user-based systemd service. It's user-based so that it has access to my credentials in ~/.config/rclone.

I created a file in ~/.config/systemd/user/rclone.service:

[Unit]
Description=rclone
AssertPathIsDirectory=/mnt
# Make sure we have network enabled
After=network.target

[Service]
Type=simple

ExecStart=/usr/bin/rclone mount --allow-other --vfs-cache-mode full media: /mnt/media

# Perform lazy unmount
ExecStop=/usr/bin/fusermount -zu /mnt/media

# Restart the service whenever rclone exits with non-zero exit code
Restart=on-failure
RestartSec=15

[Install]
# Autostart after reboot
WantedBy=default.target

I enabled and started it with systemctl:

systemctl --user daemon-reload
systemctl --user enable rclone
systemctl --user start rclone

This was my first time creating a systemd service manually, and the first time I added a user-based service, and I found it really cool. I would like to learn more about systemd. It seems like a really simple and powerful system, so I can see why so many people have strong feelings about it.

Home! Sweet home!

From here I'm all set up and ready to go. I immediately feel at home, and quickly forget that this isn't the same computer I've always used.

All my code lives in Git repos that I push to a remote server. All my important configuration files live in a Git repo. All my important documents and passwords get backed up to a remote server. All my photos and videos live in a remote bucket storage. As long as I have access to my SSH keys, I'll be able to get up and running from scratch on a new computer within a few hours.

There's really nothing that lives only on this computer, and that makes me feel great.

Published on October 30th, 2024. © Jesse Skinner

Coding with ChatGPT

[Image: Cartoon of a happy laptop surrounded by snippets of code]

I started using ChatGPT when it came out a few months ago. It was mind blowing to chat with a computer and have it feel almost like a real person.

Some people are talking about how it's going to replace all sorts of jobs, including software developers. I'm not sure about that. But I have found some ways that it can definitely make our jobs easier, if we understand how it works and use it cautiously.

Understanding the limitations

Like all Large Language Models (LLMs), ChatGPT has been trained on a massive quantity of text from the Internet. It's basically a function which takes a context as input, including your prompt and the rest of your chat log, up to a limit of roughly 2,000 words. Based on that context, it is trying to make an educated guess of what should come next. Specifically, it's trying to predict what the people who trained it would have voted as the best response.

So when you're using it for coding, or anything else, always keep in mind that it is a guessing machine. Yes, it has been trained with a large amount of information, but not all that information is correct or up to date, and, most importantly, it's very good at completely making things up while exuding confidence.

It's amazing that ChatGPT can run some statistical analysis on words written by humans, and suddenly people are wondering if this thing is going to take over the world (no), take our jobs (maybe), or become self aware (definitely not). Statistically analyzing text becomes an excellent way to imitate humans, and come up with text that looks extremely plausible. GPT probably stands for "Guessing Plausible Text" (j/k).

Unfortunately, in programming, plausible doesn't cut it. We need things to be precisely accurate. It can quickly become frustrating to use an LLM as a tool to help you with programming, because it's wrong so often.

There's still hope. I've found it can still be a very powerful and helpful tool, even for developers.

Researching with ChatGPT

I think a very natural but dangerous way to use ChatGPT is as a search engine, to ask it factual questions. The problem here is that it's like playing "two truths and a lie". A lot of what it says is certainly true, but there's absolutely no way to know which parts are completely made up.

Even knowing this, I find myself using it this way anyway, but with a caveat. You need to treat ChatGPT as if it's your know-it-all friend who will go on and on confidently about any topic, even ones he is actually clueless about. I've learned about lots of new tools and features with ChatGPT, and some of them really did exist!

One trick is to ask for references. This is as simple as adding "Give references." to your prompts, or asking for them after. For coding topics, ChatGPT will usually be able to give you URLs you can click on to specific official documentation, and that is very useful.

Clicking those links to follow up is absolutely critical here, because very often ChatGPT has told me how to do something using some specific API or function, and it has turned out to have been making it up. Those situations didn't save me any time; they actually wasted it.

All that said, I love how ChatGPT can introduce me to all sorts of things I've never heard of before. Searching on Google would have required clicking on dozens of semi-related pages and skimming through them. ChatGPT is excellent at summarizing content, so you can take advantage of that.

Here's where ChatGPT can really shine: Let's say you have some specific software architectural challenge in front of you and you're not sure how to approach it. Open up ChatGPT and write it out in as much detail as you can.

"I need to build an online web-based chat interface. There will be a small number of users, and I'm not sure which database to use to manage this. I'm using AWS for web hosting and I'm hoping to find a serverless solution to save money. I'm familiar with JavaScript and Python. What are some tech stacks I could use for this? Provide references."

Seconds later, you'll have a list of options, some of which you may not have heard of, and links to read more about each one. If there's one you like, or if you have any follow-up questions, you can just say "Tell me more about #2". Or you can provide more detail with your specific requirements to refine its suggestions.

You always need to be careful, because I find that the more specific you get, the more likely you're going to encourage it to make up something that doesn't exist. Always ask for references, and don't make a decision until you've followed up on other websites to verify what ChatGPT says.

Transforming code and text

There are some low-risk and highly effective uses of ChatGPT, and transforming content is one of them. You can paste in some code or text, and ask it to rewrite it in some specific way. In these cases, it seems much less likely to make an error, and if it does make a mistake, you should be able to recognize it and refine your request quickly.

I've pasted in a JavaScript file with two dozen constant strings defined, and asked it to convert all the variable names to uppercase. At first it converted both the variable names and the string contents to uppercase, so I had to be more specific and tell it to leave the strings alone. Then it completed it quickly and accurately, saving me a few mindless minutes.

I've pasted in an email from a client with a list of described menu options, plus a snippet of Svelte code with a few placeholders in the menu, and asked ChatGPT to add all the menu options into the code. It handled this very well.

You can ask it to rewrite a short function from JavaScript to Python, and it will do a good job of this as well, though it can make some mistakes depending on the complexity or the length of the code.

If you ever have these sorts of straightforward boring text transformation jobs in front of you, and your IDE isn't up to the job, try asking ChatGPT to do it for you, and save the headache.

Understanding & improving code

ChatGPT is excellent at summarizing any type of content, and that includes code.

Just paste in a chunk of code and it'll be able to tell you what the code does. You can ask it to add inline comments to the code for you, though Copilot is quite good at this too.

If you get a weird error message, ChatGPT might be able to give you an explanation of why the error might have happened, and some possible ways to fix the error. Unlike a webpage, you can ask follow up questions in realtime and get feedback to help you find a solution.

I've also had success pasting in a freshly written function or module, and asking ChatGPT to suggest improvements. It's told me ways to improve error handling, or some cases I hadn't thought of where things might break. It's even found a few bugs in my code, and showed me how to fix them. If you work alone, it's nice to use ChatGPT for feedback and review, and maybe you'll learn something new too.

Coding with ChatGPT

ChatGPT is very capable of writing code. However, like everything else it does, it often makes mistakes.

In my experience, the code written by ChatGPT is rarely perfect on the first try. Very often, the code will try to do something that isn't possible, or misunderstand what was being asked of it. I guess that's true of code written by humans too.

When you're asking ChatGPT to write code for you, it's up to you to run the code and paste back any error messages or problems into the chat, asking for fixes. In a way, it's like the roles are reversed. You're no longer the programmer, but stuck between the AI and the compiler. I have to say, this is not a very fun place to be. I would much rather just make changes to the code myself than try different prompts until ChatGPT is able to generate the right code. Often it's faster to type the code you specifically want and need than to type some prompts and wait to see if ChatGPT got it right.

It's almost like working with a junior developer, except that a junior developer is capable of learning and improving and eventually becoming a senior developer. ChatGPT, on the other hand, isn't learning anything from you over the long term. It might learn from you in the short term, but remember, the context of an LLM is limited, and that means that it will soon forget the suggestions you made for improvement.

If, on the other hand, you're new to programming, then ChatGPT is going to be extremely helpful and time saving. I've seen lots of new developers have great success using ChatGPT in this way, to do things they don't know how to do. I believe ChatGPT and similar tools will enable a lot more people to get into coding, and that's really exciting.

Even as an experienced developer, we're always learning new things. Having ChatGPT lead the way and provide feedback in a new programming language or library can be extremely helpful. Just be wary that it's very likely to make mistakes, so you still need to understand what the code is doing. Never trust code written by an AI, just as you wouldn't trust any code you find on the Internet. Ultimately, code generated by an LLM is coming from code from the Internet, security issues and all.

Fortunately, ChatGPT makes some of this easier for you. As mentioned above, you can ask ChatGPT to explain the code it's written, or look for bugs. Sometimes it's worth doing this with the code it just generated. It's kind of funny how that's possible. Since it generates a word at a time, it can't often go back and fix its own mistakes during generation. So if you ask it if it made any mistakes, sometimes it'll be able to spot the mistake right away and write a better version.

Ask for small, simple code snippets

To be honest, I haven't enjoyed having ChatGPT generate large amounts of code for me. It hasn't seemed to save me much time; it just changed how I spent my time. I've had more success asking it to do smaller, more limited things.

It's really good at writing SQL queries for you. Paste in the table schema and tell it what you're looking to query. You can also be specific about which programming language and library you're using to connect to the database. I think this will be very helpful to a lot of people.

It can also generate things like regular expressions, or other complex code, based on your description. More detail is always better here, including specific examples of edge cases.

Ask it to generate some boilerplate code for you, to give you a head start. Or, paste in the specifications from your manager and have it attempt a first draft for you to use as a starting point. Depending on your skill level, you might prefer to move into your editor and do the rest of the coding from here.

It's important that you're able to quickly test what it generated and verify that it works as expected. You can even paste in some code and ask ChatGPT to generate some unit tests for you. You can use it with Test Driven Development, pasting in some unit tests and asking it to write the code. You can even ask ChatGPT to generate some test code alongside any other code it generates, by including in your prompt something like "Write tests for the code too."

Comparison to Copilot

As I've written about before, I really enjoy using GitHub Copilot, and it helps me to be more productive. Copilot also uses GPT, but it's doing so in a more focused way that automatically takes your code into its context. It's very good at suggesting code while you're writing it, suggesting comments for your code, or generating code based on your comments. ChatGPT hasn't at all replaced my use of Copilot. If anything, it has made me appreciate Copilot more, and encouraged me to use Copilot in more creative ways. I've found myself bringing up the Copilot suggestions panel more often, to see the variety of suggestions available, and very often there are some better and more useful snippets available in here.

For some reason, using Copilot is less misleading. When Copilot makes a wrong suggestion, it doesn't bother me. Perhaps it's because there's no confidence here, everything is just a "suggestion".

ChatGPT is of course better at discussing and explaining things in plain language. Microsoft is already planning to integrate a chat interface into Copilot, so-called "GitHub Copilot X". You can sign up for the beta if you want to get early access to Copilot chat. I'm really looking forward to this, as it'll likely be a lot more useful for coding than ChatGPT currently is.

It's not a human

It's very important to keep in mind that ChatGPT is not a person. It's a statistically-driven guessing machine.

Like a human, it makes mistakes, but it won't tell you how sure or unsure it is about being right.

Like a human, it's trying to generate responses it thinks you'll like, but it has no feelings and will never be your friend.

Like a human, it has biases that it's not aware of and can't articulate, but it's incapable of growing and learning from you over time.

It can be hard to talk to a machine like this without all the baggage we've picked up from talking to real humans. I find myself saying "please" and "thank you" when I really don't need to.

I think we need to create a new place in our brains for interacting with things like this.

It's ok to be a bit blunt and succinct. Often it's necessary to be extra explicit, and state things that might otherwise seem obvious. You don't need to spare the feelings of these guessing machines. You need to tell it whenever it's wrong and ask it to fix its mistakes. You can tell it to "be succinct", to "skip unnecessary phrases" and "just output the code" and other commands which speed it up and tailor the output to your preferences. You may need to repeat these phrases regularly, and you may likely find some new patterns that work well for you.

Try it for yourself, have fun

I've outlined some of the approaches that have worked for me, but I suggest you try it out yourself and see what works for you. I think it's worth experimenting and finding a way for ChatGPT and other AI tools to help you out in your work.

These tools should make your life better, and make work more fun. The goal isn't just to save time, but to enjoy the process.

When you're feeling stuck, you can use ChatGPT as a mentor to help you get unstuck. When you want to bounce some ideas off someone, ChatGPT can give you helpful suggestions.

Save the fun coding stuff for yourself, and leave the boring parts for ChatGPT.

Published on April 23rd, 2023. © Jesse Skinner

What I learned from wearing a Continuous Glucose Monitor for two weeks

[Image: FreeStyle Libre 2 sensor in an arm]

A couple of weeks ago, I bought one of those Continuous Glucose Monitors (CGM). My nutritionist suggested I do this, even though I'm not diabetic or pre-diabetic, just to learn something about myself and how food affects me. I'm always excited about new technology and gadgets, so of course I went out and bought one right away.

I bought the FreeStyle Libre 2 monitor, the only brand available in Canada (as far as I know). It cost $119 CAD, and I was able to buy it from the pharmacy without a prescription.

The monitor itself is a small thin disc with a tiny needle in the middle. You install it into the back of your arm with an applicator. I was nervous, but it never hurt at all. It has a strong adhesive, like a bandage, so there's very little chance of it falling out accidentally. It stayed put for the full 14 days, though I felt I had to be a bit careful with it.

Once installed, you can sync and activate it with your smartphone, by installing the app. The monitor syncs to your phone with NFC, so basically you can tap it as often as you want to get your current blood sugar. You'll also see a line chart of what happened through the day on a minute-by-minute basis.

After having kept a close eye on it for the last two weeks, I now have a way better understanding of what my body feels like when the glucose changes, and how to keep it level. Here are a few things I learned from it.

Note: I'm fairly healthy and don't have diabetes or pre-diabetes, and these are just hunches, so may not be true or apply to you. Factors like sleep, exercise, hormones, and your overall health can have a big influence on your blood sugar. The only way to learn about your own body is to try this yourself. Still, I'm hopeful some of this will be at least interesting to some of you.

  1. Eating a sugary or high carb meal (pizza, oatmeal, half a bag of doritos) makes my glucose quickly spike up high and then drop sharply, dropping lower than my average. My understanding is this is because insulin was released, which allows the glucose to move from the blood into the cells, and excess glucose is converted into fat.

  2. Right after a high spike, I often drop too low and feel like snacking shortly after eating even though my body just had an excess of energy. So I'm probably storing that glucose as fat, and then end up eating more than I need. The takeaway here is that I'm going to try to limit half my meal to carbs, and add in a salad or some protein to balance things out.

  3. Foods with a balance of carbs, fat, protein and fibre will make the glucose go up slower, not as high, stay up longer, and come down slower, ending up closer to my average, so I'm not hungry afterwards. By limiting the level of refined carbs or sugar in a meal, there's a better chance I'll feel fuller for longer, and be able to go longer without snacking afterwards.

  4. Whenever I'm feeling hungry, I've probably dropped below 5.0 mmol/L. It's interesting how clearly blood sugar coincides with the desire to eat.

  5. Whenever I'm feeling starving or woozy, I've probably dropped below 4.0 mmol/L. This happened a few times when I exercised about an hour after eating a high-carb meal. While I'm crashing from the spike, my insulin and physical activity are both drawing down glucose quickly at the same time, and it ends up going too low. I now avoid eating a high carb or sugary meal (eg. oatmeal) before exercising.

  6. I never felt any of my sugar spikes. The highest I got to was 11 mmol/L after eating pizza. Usually my spikes are around 9 mmol/L. Hyperglycemia starts around 10 mmol/L. Hyperglycemia can damage cells and increase insulin resistance, and over time can contribute to developing long-term health problems such as cardiovascular disease and type-2 diabetes.

  7. Meals with fewer carbs might barely go up at all. Even a single slice of bread or pizza, or a handful of chips seems to have very little impact. This shows how important portion control is. By avoiding carbs altogether, it was possible for me to stay basically flat all day, though I don't know how necessary or healthy that is.

  8. It's really interesting how a lot of advice I hear all the time now makes more sense in a tangible way. Eat smaller meals (shorter spike). Try intermittent fasting (more time between spikes). Avoid sugar, especially sugary drinks (very sharp spike). Eat light before exercising (so you're not crashing after a spike and dip into danger zone). Etc.

  9. I now suspect that a lot of times over the years when I've felt really off without explanation were due to crashing below 4 after a spike.

  10. As a test one night, I ate a tremendous amount of pasta for dinner. Surprisingly, this didn't make me spike or crash sharply! But what it seemed to do was keep me at a rather high baseline. Somehow, I was at the highest hours later, right before bed (8.3 mmol/L). When I woke up the next morning it was still quite high (6.6 mmol/L, in the pre-diabetic range for a fasting glucose)! This was just a one-off so there could have been other factors at play (exercise, stress, etc.) but I feel like it was probably mostly the pasta.

Overall, I was really glad I tried this out. I think it's absolutely worth the cost, because it has given me tangible first-hand experience with a lot of things I already knew in the abstract. I see it as a one-time educational thing that'll ideally pay benefits over the next decades. I believe it will help me make better choices and hopefully avoid problems with my health over the long run.

I might try it again down the road to see how I've changed. Or maybe I'll get one of those finger prick glucose monitors to spot check when I'm feeling strange.

The biggest change I've made after all this: I don't put sugar in my coffee anymore. This was adding two sharp glucose spikes early in my day every day, so cutting those out was a quick win. Now, I tend to stay relatively flat through the day with a moderate, slower increase after dinner.


Here are two line graphs, one from my best day, and the other from my worst day (the first spike was from oatmeal, the dinner spike from pizza, which apparently even broke the monitor!):

My blood sugar charts from two days

For comparison, here are two sample days sent to me by RevK, who is diabetic:

RevK's blood sugar charts from two days
Published on April 2nd, 2023. © Jesse Skinner

How I use GitHub Copilot to be more productive

GitHub Copilot is a VS Code extension that brings machine learning into your development environment. It will upload snippets of your code to Microsoft's servers, and send back a list of suggestions of what it predicts will come next.

Some people have wondered whether our jobs as developers are doomed, now that machine learning can write code for us. Will computers be able to write all the code in the future without needing developers involved? I really don't think this will happen, but I do think our jobs will get a bit easier with help from tools like Copilot.

"Copilot" is a really good name for the tool, because although it won't write all your code for you anytime soon, it makes very helpful suggestions most of the time. Often, you'll have to make some tweaks to the suggestion to get it working correctly. You're still the pilot here, but Copilot is sitting beside you actively trying to make your life easier.

When I started using Copilot, I thought it was super creepy. I could write comments and Copilot would suggest code that does what the comment says. I'd never seen anything like this before. I also had mixed feelings about using code that seemed like it might be plagiarised directly from some GitHub project.

Three months later, it has become fully integrated into my development workflow. When I'm coding somewhere without Internet access, I'll find myself briefly pausing to see what Copilot suggests, only to realise that I'm on my own.

Generally, I write code the way I used to before, and GitHub Copilot will suggest just a few lines of code for me at a time. Much of the time, the suggestion is almost exactly what I would have typed anyway.

Even though I've been coding professionally for decades, Copilot has made me even more productive. Here are a few ways that Copilot has changed the way I work.

Don't repeat yourself, let Copilot do it for you

Probably the most reliable use of Copilot is to set up some kind of pattern and allow Copilot to repeat the pattern for you.

For example, I never have to type out something like a list of months. I can just write a descriptive variable name, and Copilot will suggest an array for me:

const MONTHS = // ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'];

If you want a different format of month, you just give it an example and Copilot will suggest the rest:

const MONTHS = ['Jan.', // 'Feb.', 'Mar.', 'Apr.', 'May', 'Jun.', 'Jul.', 'Aug.', 'Sep.', 'Oct.', 'Nov.', 'Dec.'];

Notice how "May" doesn't even have a period after it? Copilot is surprisingly good at this sort of autocomplete.

In other cases, where your code has a repetitive nature but you don't want to over-complicate things by writing a loop, Copilot can save you the hassle. For example, if you're creating an object with property names, and the values use the names in some kind of pattern, give an initial example and Copilot will do the rest for you:

return {
    age: model.getAge(),
    address: // model.getAddress(),

With this sort of pattern, I go one at a time, pausing briefly after each property name and hitting TAB to autocomplete once Copilot figures it out. It saves me some typing and the inevitable typos too.

It finishes my sentences

VS Code is already quite good at using Intellisense to make useful suggestions, or to close parentheses and brackets for me. Copilot takes that to the next level, and often suggests the whole rest of the line for me. Sometimes it's wrong, but often it's exactly right.

For example, if I'm writing some filter statement, Copilot will make a good guess as to how the rest of it will look:

const activeItems = items.filter( // item => item.active);

Good guess! But if that's not how I named my variable, I might keep typing to give it more context:

const activeItems = items.filter(item => item.status // === 'active');

The more context Copilot has, the more likely it will guess correctly. At some point, Copilot generally figures out exactly what I was about to type, and when it does I can just hit TAB and move on to the next line. It's trying to read my mind, and when it gets it right, that means fewer keystrokes and probably fewer typos too.

Even if it only ends up suggesting a couple closing parentheses and a semicolon, I'm happy for the help.

Naming things is easier

Phil Karlton famously said that the two hardest problems in computer science are cache invalidation and naming things. Copilot makes at least one of these a bit easier.

You saw in the previous example that when I was filtering an array of items, Copilot suggested item as the argument to the filter function. This is a simple example of where Copilot gets things right almost every time.

Usually I'm not too picky about function or variable names, so if Copilot comes up with something half-decent, I'll go with it. I also think that it's probably well informed by what many others have chosen before, and I think familiar variable names are probably better names anyway.

For example, if I'm about to write a SQL query, Copilot will make up the table and column names for me as well as or better than I would:

async function addBook(params) {
    await query( // `INSERT INTO books (title, author, isbn, description, image, price, stock) VALUES ('${params.title}', '${params.author}', '${params.isbn}', '${params.description}', '${params.image}', '${params.price}', '${params.stock}')`);

Wow, it did that with no context other than "book". Obviously there must be a bunch of open source code or demos that work with books in a database. Those might not be the names I end up choosing, and I probably won't need that exact list of columns, but often they'll be pretty good, and might make me think of something I'd otherwise miss. More than once, Copilot has suggested better names than I had planned on using.

But notice that there's something really dangerous in this suggestion! The values are being inserted into the database without being escaped! This is a security flaw, and a major reason why you still need to be careful and knowledgeable about the code that Copilot is suggesting. It won't always be secure, or even good. You still need to be the pilot here. The same goes for any code you find online.
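A safer sketch of that insert uses placeholders instead of string interpolation. The query(sql, values) helper here is hypothetical, but most Node database clients (mysql2, pg, etc.) accept placeholders like this, and the driver escapes each value so user input can't break out of the SQL:

```javascript
// Hypothetical query(sql, values) helper, standing in for a real
// database client's parameterized-query API. The driver escapes
// each value in the array, so no manual string interpolation.
async function addBook(params, query) {
    await query(
        'INSERT INTO books (title, author, isbn, price) VALUES (?, ?, ?, ?)',
        [params.title, params.author, params.isbn, params.price]
    );
}
```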

I don't need Stack Overflow as often

Speaking of finding code online, I rarely have to actively go out and search on Stack Overflow or GitHub anymore. If I'm not sure how to do something, I'll use a descriptive function name or write a comment that describes what I'm trying to do. Most of the time, Copilot will make a useful suggestion. More than once, I've learned something new from the suggested code.

function getRandomNumber(min, max) {
    // return Math.floor(Math.random() * (max - min + 1)) + min;

I think of Copilot as searching through publicly available source code for me, to find examples where someone else has done the exact same thing before. I really don't want to accidentally plagiarise a large block of code from GitHub without knowing where it came from. But I will let Copilot suggest a line or two of code for me, that does something a bit tricky but fundamental, where the code would have been essentially identical had I worked it out on my own.
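Written out in full, that suggested random number function is runnable as-is:

```javascript
// Returns a random integer between min and max, inclusive.
// Math.random() gives a float in [0, 1), so multiplying by
// (max - min + 1) and flooring yields an offset from 0 to max - min.
function getRandomNumber(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}
```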

It knows how I intended to use that new variable

When I'm writing a function and declare a variable, Copilot will often have a good idea about how I intended to use that variable. Often, the suggestion here will be bang on.

function processItems(items) {
    let activeItems = 0;

    for (const item of items) {
        // if (item.active) {
        //     activeItems++;
        // }

Sometimes, I'll forget about a variable I'd declared earlier, so Copilot's suggestion ends up being a friendly reminder to use the variable. Even if the suggestion is wrong, the reminder ends up being useful.

Test-driven Copilot Development

The most surprising thing Copilot has done was suggest all the code I needed to make my tests pass, even though the tests were in a different file! I didn't know Copilot was capable of this, so it was a big surprise when it happened.

Let's say this was your test file, model.test.js:

import Model from './model.js';
import { deepEqual } from 'assert';

it('should return an empty array by default', () => {
    const model = new Model();

    deepEqual(model.get(), []);
});

it('should double the numbers provided', () => {
    const model = new Model();

    model.add(7);

    deepEqual(model.get(), [14]);
});

Open up a blank new file model.js, type export default and Copilot will automatically suggest the rest:

export default // class Model {
    // constructor() {
    //    this.data = [];
    // }

    // get() {
    //    return this.data;
    // }

    // add(number) {
    //     this.data.push(number * 2);
    // }
// }

Tests pass without writing any code! This brings Test-Driven Development to a whole new level, where you can focus on writing good tests, and Copilot will write the code that passes the tests! It doesn't always work this well, but when it does, you can't help but giggle in delight.
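Written out without the suggestion markers (and minus the export default, so it stands alone here), the class Copilot produced makes both tests pass:

```javascript
// The Model class as Copilot suggested it: numbers are doubled
// as they're added, and stored in an internal array.
class Model {
    constructor() {
        this.data = [];
    }

    get() {
        return this.data;
    }

    add(number) {
        this.data.push(number * 2);
    }
}
```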

Conclusion

When I first tried Copilot, I thought it was super creepy. Now, I see Copilot as my delightful junior assistant, the two of us collaborating on writing code. The more predictable you can be, by using descriptive function names and variables, the more likely Copilot will correctly predict what you're trying to do.

As I write this, Copilot is still in Technical Preview, and so you have to apply to be on the wait list. I only had to wait a day when I applied, but you may have to wait longer or might not be approved at all. One day, Copilot will likely cost money to use. I think I'll probably be willing to pay for it, because it does save me time and energy, ultimately making my services more valuable.

I hope you get a chance to try out Copilot for yourself. It's fun to use, and can even make you a more productive programmer too.

Published on March 1st, 2022. © Jesse Skinner

How to add gravatars to your web page or blog

Last night, I added gravatars to the comments on this site. The best blog post to see them is Blog Tipping.

Gravatars are globally recognized avatars. It's a clever way to let people upload a little picture that goes beside their comments. This way, people only have to add them once (at the Gravatar website) and they automatically work across all web sites implementing them. They use the MD5 hash of your email address as an id, so you only need to put in the same email address when you write comments, and your gravatar will appear.

Implementation is very easy. You don't need to make any HTTP request on the server or anything. You only need to add an image that points at the gravatar URL, with some optional parameters, and the browsers of people visiting your site will download the images from gravatar.com.

The only required parameter is gravatar_id, the MD5 hash of the email address. Every programming language has a way of computing an MD5 hash. PHP probably has the simplest example:

<img src="http://www.gravatar.com/avatar.php?gravatar_id=<?php echo md5($email); ?>"/>

There are also optional parameters you can use. rating lets you make sure there are no pornographic or mature images on your site. size lets you set the width/height to something other than 80px. default lets you set the URL of a default image, in case you want to display something for people without a gravatar (a 1x1 invisible image is the default). border is supposed to let you set the colour of a 1px border on the image, though I wasn't able to get this working. So the output of a complex example might look like this:

<img src="http://www.gravatar.com/avatar.php?size=50&border=000&gravatar_id=eb22390416c3b62025dc9ad2120a8ade"/>

It's worth noting that adding gravatars can be a CSS and design challenge. You don't know whether the image will be a real gravatar or just a 1x1 invisible image. If you add margins or padding to the gravatar, they will end up on the invisible pixel as well, which can look weird. I solved it by indenting all comments 60px to allow space for a 50px image. If the image isn't there, the comment is just indented anyway. You can also solve this by setting the default image to a "No gravatar" image.

Gravatars can really be implemented anywhere you know an email address and may want to show an image. There is a Thunderbird add-on that lets you see Gravatars while you're reading emails.

For more details, or to see how to implement gravatars in other languages and blog publishing software, the Gravatar web site has a detailed implementation guide. If you just want to upload your own gravatar, go sign up.

Published on August 23rd, 2006. © Jesse Skinner