Artificial Intelligence

This page is a collection of my thoughts on generative AI and artificial intelligence more broadly.

It's becoming more and more important to build muscle memory for working with LLMs; this is where I catalogue my learnings and mental models.

I'm not an expert, but I'm trying to learn.

Stack

Cursor
v0
OpenAI

There are a lot of tools out there, and it's easy to get "lost in the sauce" of it all. I've boiled my generative AI toolset down to a choice few tools that I use for different problems.

Writing

I explicitly do not use LLMs or generative AI for writing. I find it's too easy to get distracted by the suggestions, and hard to get into a flow.

To ensure distraction-free writing, I use Obsidian as my note-taking app.

Chat LLMs

I use OpenAI for all my chat LLM needs. Primarily, I use o3 for advanced reasoning (code, math, etc.) and gpt-4o for general-purpose tasks.
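As a rough sketch of what a call looks like (using the official openai npm package, assuming an OPENAI_API_KEY in the environment; the helper and prompt are made up for illustration):

```ts
import OpenAI from "openai";

// Assumes OPENAI_API_KEY is set in the environment.
const client = new OpenAI();

async function ask(question: string): Promise<string> {
  const completion = await client.chat.completions.create({
    // gpt-4o for general-purpose tasks; I'd swap the model for
    // a reasoning model like o3 when the problem is harder.
    model: "gpt-4o",
    messages: [{ role: "user", content: question }],
  });
  return completion.choices[0].message.content ?? "";
}

ask("Summarise RAG in two sentences.").then(console.log);
```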

I also use Anthropic's Claude models for some tasks.

Code LLMs

I use Cursor for all my programming (TypeScript, Go and Java). I default to claude-3.7-sonnet. I've been using Cursor for over a year now, and it's well worth the yearly subscription.

For prototyping web UI, I use v0. For mobile UI, because I'm using React Native, I still use v0, then translate the Next.js code to React Native using Cursor's agent mode.

Image Generation

For image generation, I use OpenAI. gpt-image-1 is good at generating realistic images, and performs surprisingly well at rendering text within images.
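A minimal sketch with the same openai npm package (gpt-image-1 returns the image as base64 rather than a URL; the prompt and filename here are invented):

```ts
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY in the environment

async function generateImage(prompt: string, outFile: string) {
  const result = await client.images.generate({
    model: "gpt-image-1",
    prompt,
    size: "1024x1024",
  });
  // gpt-image-1 responses carry base64-encoded image data.
  const b64 = result.data?.[0]?.b64_json;
  if (!b64) throw new Error("no image data returned");
  fs.writeFileSync(outFile, Buffer.from(b64, "base64"));
}

// A prompt with text in it, to exercise the text-rendering strength.
generateImage('A street sign that reads "UNDER CONSTRUCTION"', "sign.png");
```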

Timeline

2025

back-to-building

This year, I'm building again.

I've got two experiments on the go:

  • MeetSova - a mobile application that focuses on AI literacy
  • SwipeCraft - an experimental framework for generating social-media-ready content with AI avatars, using Wan2.1 and ElevenLabs

2024

learning-more

I had felt that in previous years I wasn't really getting anywhere with GenAI, and I put this down to trying to use the technology in a very shallow way.

I decided to take a step back and focus on understanding the technology and how it works.

I spent the year reading papers and building out a few experiments. No real product or revenue ambitions; just a push to really understand the tech and how to use it well.

I did do a hackathon at work, where I built a codebase indexing tool with RAG, providing Q&A over internal codebases using Ollama, LangChain & Chroma.
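The hackathon code isn't public, but the shape of it is simple. Here's a minimal sketch of the same idea (using the chromadb and ollama npm clients directly rather than LangChain; the code chunks, model names and question are all placeholders):

```ts
import { ChromaClient } from "chromadb";
import ollama from "ollama";

// Hypothetical snippets standing in for chunks of an indexed codebase.
const chunks = [
  { id: "auth.ts#0", text: "export function verifyToken(token: string) { ... }" },
  { id: "db.ts#0", text: "export async function getUser(id: string) { ... }" },
];

async function main() {
  const chroma = new ChromaClient(); // assumes a local Chroma server
  const collection = await chroma.getOrCreateCollection({ name: "codebase" });

  // Index: embed each chunk with a local Ollama embedding model.
  for (const chunk of chunks) {
    const { embedding } = await ollama.embeddings({
      model: "nomic-embed-text",
      prompt: chunk.text,
    });
    await collection.add({
      ids: [chunk.id],
      embeddings: [embedding],
      documents: [chunk.text],
    });
  }

  // Query: embed the question, retrieve the nearest chunks,
  // and stuff them into the prompt as context.
  const question = "Where do we verify auth tokens?";
  const { embedding } = await ollama.embeddings({
    model: "nomic-embed-text",
    prompt: question,
  });
  const results = await collection.query({ queryEmbeddings: [embedding], nResults: 2 });
  const context = (results.documents[0] ?? []).join("\n---\n");

  const answer = await ollama.chat({
    model: "llama3",
    messages: [{ role: "user", content: `Context:\n${context}\n\nQuestion: ${question}` }],
  });
  console.log(answer.message.content);
}

main();
```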

2023

building-with-ai

In 2023, I ran two experiments using LLMs and commoditised AI services:

  • WhatsCovered - an insurance claims chatbot (RAG)
  • TranscribeAudio - a tool to transcribe audio files

RAG had just started to become a thing, and I was excited to try it out. WhatsCovered was that experiment, and it got me to grips with LangChain and Chroma.

For TranscribeAudio, I used AssemblyAI to transcribe audio files. This project actually ended up doing $200 in MRR for ~6 months!
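The core of the tool was essentially one SDK call. A minimal sketch with the assemblyai npm package (the file path is a placeholder, and this is a reconstruction rather than the original code):

```ts
import { AssemblyAI } from "assemblyai";

// Assumes ASSEMBLYAI_API_KEY is set in the environment.
const client = new AssemblyAI({ apiKey: process.env.ASSEMBLYAI_API_KEY! });

async function transcribeFile(path: string) {
  // Accepts a local file path or a public URL, uploads it,
  // and polls until the transcription job completes.
  const transcript = await client.transcripts.transcribe({ audio: path });
  if (transcript.status === "error") throw new Error(transcript.error);
  return transcript.text;
}

transcribeFile("./episode.mp3").then(console.log);
```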

2022

ai-consumer

In 2022 I went "all in" on being a consumer of generative AI. I used ChatGPT and Copilot extensively to improve the quality of my web scraping side projects.

I also spent a bunch of time writing migration utilities between Obsidian and Notion.

2021

getting-started

In mid-2021, I got access to the GitHub Copilot Technical Preview. I used it for almost two full years in VS Code, and found it plenty useful.

Copilot was great at finishing off a function if you left a descriptive comment, but it felt oblivious to the codebase as a whole.

I also got access to the OpenAI API for GPT-3 and Codex, and with a friend I tried to build an e-learning platform with a strong GPT-3 integration.

Needless to say, we didn't get anywhere meaningful, and decided that this "LLM stuff" wasn't there just yet.