Sebastifact: A fact machine for 7-year-olds

My son is 7, an age where he is becoming interested in what I do at work. He ‘gets’ the idea of apps and websites, but I wanted to put together a very simple project that we could build together so that he could see how to take an idea and turn it into a real “thing”.

We brainstormed some ideas. He loves writing lists of facts and finding pictures to go with them, with the ambition of building an encyclopaedia, so we started work on a simple website: he could type in the name of a historical person and it would return a set of 10 facts to him.

His design goal was pretty simple - the website should be yellow. I decided it was probably worth focussing on the functionality, so yellow it is.

As is usually the case, the backend is where the action is. I wasn’t sure how to explain to him just how complex this site would have been to build just 2 years ago - the idea of entering almost any historical figure into a website and having it simply provide 10 facts back would have been a hugely laborious process requiring many years of contributions. Yet here we are in 2023, and the solution is “just plug a Large Language Model into it”. That makes this a pretty easy introductory project. We talked a little about how an API works and how computers can talk to each other and give each other instructions, and then set to work writing the prompt that we wanted to use.

As we tested it, he started to ask if it would work for animals too. And then mythical beasts. And then countries. Seeing him working through the ideas and realising that he could widen the scope was great. This is the prompt we eventually settled on:

You are an assistant to a family. Please provide responses in HTML. The User will provide a Historical Person, Country, Animal or Mythical Beast. Please provide 10 facts appropriate for a 7 year old. If the user provided name is not a Historical Person who is dead, a Country, an Animal or Mythical Beast please respond with an error message and do not respond with facts.

By asking the LLM to provide responses in HTML, we offloaded the task of formatting the output, and GPT-3.5 Turbo is pretty good at providing actual HTML - I haven’t seen any issues with it yet. Instructing it to keep the facts appropriate for a 7-year-old changed the tone, and we got facts that were (surprise) actually interesting for him without being too pointed in their accuracy.
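
For anyone curious about the wiring, here is a rough sketch of that call in Python using the official openai client. The names (get_facts, SYSTEM_PROMPT) are illustrative assumptions, not the site’s actual code:

```python
# A minimal sketch of sending the prompt to GPT-3.5 Turbo, assuming the official
# openai Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an assistant to a family. Please provide responses in HTML. "
    "The User will provide a Historical Person, Country, Animal or Mythical Beast. "
    "Please provide 10 facts appropriate for a 7 year old. If the user provided name "
    "is not a Historical Person who is dead, a Country, an Animal or Mythical Beast "
    "please respond with an error message and do not respond with facts."
)

def get_facts(subject: str) -> str:
    """Return an HTML fragment of 10 facts (or an error message) for the subject."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": subject},
        ],
    )
    # The model replies with HTML, so the fragment can be dropped straight into the page.
    return response.choices[0].message.content
```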

The response takes a few seconds to come back, so I implemented caching on the requests - the most popular searches now appear instantly. Ideally, in the future I’ll give each result its own URL.
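
Conceptually the caching is just storing the generated HTML against a normalised search term. Something along the lines of this sketch would do - again an assumption about the implementation, building on the get_facts sketch above, rather than the real code:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def _facts_for(normalised_subject: str) -> str:
    # Only reached on a cache miss; the result is kept for future requests.
    return get_facts(normalised_subject)

def get_facts_cached(subject: str) -> str:
    # Normalise first so "Plato", "plato" and " PLATO " share one cache entry;
    # repeat searches then return instantly instead of waiting on the model.
    return _facts_for(subject.strip().lower())
```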

As a final bonus, I plugged in the Unsplash API to return images for him. It doesn’t always work (Unsplash apparently has relatively few pictures of Plato), but for most searches it provides a suitable image. I might consider switching to the DALL·E API, but for now this is good enough.
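
The image lookup boils down to one call to Unsplash’s photo search endpoint. As a sketch, assuming the requests library and an UNSPLASH_ACCESS_KEY environment variable:

```python
import os
import requests

def get_image_url(subject: str) -> str | None:
    """Return the first matching Unsplash photo URL, or None if nothing is found."""
    response = requests.get(
        "https://api.unsplash.com/search/photos",
        params={"query": subject, "per_page": 1},
        headers={"Authorization": f"Client-ID {os.environ['UNSPLASH_ACCESS_KEY']}"},
        timeout=10,
    )
    response.raise_for_status()
    results = response.json().get("results", [])
    # Unsplash has no photo for every search (poor Plato), so fall back to no image.
    return results[0]["urls"]["regular"] if results else None
```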

There were two takeaways from this. The first: working with a 7-year-old is an exercise in scope creep. I wanted to keep this to an afternoon activity so that it would hold his interest, but of course it could have been a much larger site if we had incorporated all of his ideas. Giving him something that he had actually built was the goal, so keeping the scope to something achievable whilst still feeling like his own was the most important thing.

The second is something that I hammer on about all the time: an LLM is a massive toolbox that can help users achieve almost anything, but there is great value in providing a User Interface that lets a user achieve one very specific task. There is a good reason why there are so many kitchen gadgets that could basically be replaced with a single knife - the user experience of a dedicated tool that requires less skill is better.

The site is at https://www.sebastifact.com
