<![CDATA[RichStone Input Output]]>https://richstone.io/https://richstone.io/favicon.pngRichStone Input Outputhttps://richstone.io/Ghost 6.11Sat, 17 Jan 2026 17:04:31 GMT60<![CDATA[What to expect from your blog & how to close it gracefully in the end]]>Yesterday, I saw in my inbox that richstone.io is expiring. So I did what any reasonable person would do in 2026: I talked to Claude Desktop about it.

After a good chat, we decided not to pay €60 for the next season of the richstone.io domain, nor

]]>
https://richstone.io/what-to-expect-from-your-blog-how-to-close-it-gracefully-in-the-end/696bb5bda0f023000105ded9Sat, 17 Jan 2026 17:03:58 GMTYesterday, I saw in my inbox that richstone.io is expiring. So I did what any reasonable person would do in 2026: I talked to Claude Desktop about it.

After a good chat, we decided not to pay €60 for the next season of the richstone.io domain, nor the €13/month in hosting costs. But money isn't really the reason. It's more about the cost of focus and doubling down on impacting what already works.

If you're thinking about starting a blog, here's a little recap of the past 5 years that I hope you can learn from. If you're in the closing stage of a blog that served you and others, there's a how-to on archiving your treasures gracefully at the end.

The Journey

RichStone I/O was born in February 2021, about four years after my data-driven engineering blog at richstone.github.io that I did during my computer science studies. Back then, the goal was to level up in my Ruby journey and help other developers who are maybe a step or two behind me get better with Ruby on Rails, APIs, testing and nowadays also AI.

It was a lot of learning and fun doing it. It gave me direction, reflection and purpose.

I made tons of connections. Many people who contacted me through the blog or responded to my newsletter became friends or customers over the years. I had people tell me they reached out because of the silly pictures I draw myself. And whenever I got into a new job, my employer would mention that the blog was one of the reasons they wanted to learn more about me before actually talking to me.

The Specifics

The personal value has been hard to measure, but if I were to describe it with one word: Great. Judging from the occasional feedback I've gotten over the years, the value for others was OKish.

In terms of numbers, I watched them every now and then. I can definitely say that when I posted daily or weekly, traffic went up considerably. Otherwise, it trended down, with recent years being worse, given the power of LLM search.

I think Google Analytics currently doesn't go back more than the last calendar year, in which I had 16K "users" (unique visitors?). I got most reads from the US, China, Singapore, Germany, India, UK, Canada...

286 people are on the subscriber list. Some of them might be spam sign-ups (I had some clean-ups in the past, but Ghost doesn't make it particularly easy to manage dead subscribers). On the bright side, I consistently get 40-50% open rates, which is actually solid.

My guess as to why the subscriber count stayed at this level over 4 years and ~100 posts:

  • My content tried to solve problems but never addressed huge pains. Some posts were OK; a small percentage were personal updates (I think I rage-deleted most of them at some point :D).
  • I've rarely done any serious list building (like connecting with other creators' audiences or lead magnets).

I did some sharing on socials, a few posts went trending on Reddit, got decent engagement on LinkedIn a couple times, but this isn't a good marketing strategy for 2026.

The truth is, writing and growing a blog is challenging. And you might get to a point where you want your actions and time to have increasing focus and impact, which is where I am.

I'm also increasingly bearish on text content given the effort it requires versus the potential upside. Video, live groups, events — I'm more bullish on those formats. They're a lot of effort too, but the connection is more direct, the feedback loop is faster.

The great marketer that I obviously am 😄, I'll save the email list and move it to funnelsonrails.com — a project I'll still dedicate some time to this year, so you might hear from me there (unsubscribe anytime, as always). The rest of my time I'll spend building AI and integration systems for real products and helping businesses use them (currently ClickFunnels, my main contract).

(the below is mostly Claude-generated, but it has a fun link to a copy of my website here; archiving a blog on Netlify doesn't take more than 5 minutes 😮)


How To Archive Your Blog

Step 1: Write a Goodbye Post

You're reading it. Let your readers know what's happening and where they can find you next. Don't just disappear.

Step 2: Export Your Content

You have two main options here, depending on your setup.


Option A: The wget Mirror (should work for most blogs more or less)

This is the quick-and-dirty approach. You're essentially taking a snapshot of your entire site as static HTML files.

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent -e robots=off https://yourblog.com/

What this does:

  • --mirror — Downloads the whole site recursively
  • --convert-links — Rewrites links to work locally
  • --adjust-extension — Adds .html extensions
  • --page-requisites — Grabs CSS, JS, images
  • --no-parent — Stays within your site
  • -e robots=off — Ignores robots.txt

The catch: If your blog serves images from a CDN (like Ghost blogs on DigitalPress), you'll need to also grab those assets:

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent -e robots=off --span-hosts --domains=yourblog.com,your-cdn-domain.com https://yourblog.com/

Where to host it for free: Drop the folder into Netlify (just drag and drop!) or push to GitHub Pages. You'll get a free subdomain like yourblog.netlify.app.

I tried this approach. It kind of worked: title images showed up and navigation worked, but in-post images were broken. So it doesn't work for Ghost blogs with images inside the posts (of which I have plenty).
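If you want to gauge the damage before giving up on Option A, a grep over the mirror shows which pages still point at the CDN instead of local files. Here's a minimal, self-contained sketch (the mirror folder and CDN hostname are mock examples; in practice, run the grep inside your downloaded site folder):

```shell
# mock mirror with one page that still references the CDN
mkdir -p mirror/post
printf '<img src="https://your-cdn-domain.com/img/1.png">' > mirror/post/index.html

# list every HTML file that still embeds the CDN hostname
grep -rl "your-cdn-domain.com" mirror --include="*.html"
# → mirror/post/index.html
```

If that list is short, hand-fixing a few pages might be faster than switching to Option B.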

Still, Netlify's drop-to-deploy is pretty impressive! Check for yourself:

https://richstone.netlify.app/


Option B: Ghost Export (For Ghost Blogs)

If you're on Ghost (like I was), you have a cleaner option.

1. Export your content:

  • Go to Ghost Admin → Settings → Export
  • Download the JSON file (contains all your posts, settings, metadata)

2. Export your images:

  • In my case, I need to get in touch with my Ghost hosting provider.

3. Save your subscriber list:

  • Ghost Admin → Members → Export

4. Store it all:

  • Create a folder in Google Drive or wherever you keep important stuff
  • Put the JSON export, images, and member CSV in there
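Before you file the archive away, it's worth sanity-checking that the JSON export actually contains your posts. A quick sketch (the mock export below mirrors the db[0].data.posts shape Ghost exports used at the time of writing; point the command at your real downloaded file instead):

```shell
# mock Ghost export with two posts, for illustration only
cat > ghost-export.json <<'EOF'
{"db":[{"data":{"posts":[{"title":"Hello"},{"title":"Goodbye"}]}}]}
EOF

# count the posts in the export
python3 -c "import json; d = json.load(open('ghost-export.json')); print(len(d['db'][0]['data']['posts']))"
# → 2
```

If the count matches what your admin dashboard showed, the archive is probably complete.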

I'm hosting my blog on DigitalPress, where you need to ask for an image export specifically.

Later, you can import this somewhere. I know that if you're going to host with ghost.org, they'll help you map your JSON export and images back together; at least, so they told me.


That's a Wrap

Thanks for reading RichStone I/O over the years!

You can still find me:

See you around! 🧡

]]>
<![CDATA[[8/4] How a Scotsman saved hours of my time by turning an LLM into my virtual assistant]]>https://richstone.io/8-4-how-a-scotsman-saved-hours-of-my-time-by-turning-an-llm-into-my-virtual-assistant/68fe6a360b7bf50001d97fd2Sat, 27 Dec 2025 06:41:22 GMT

Sometimes you get stuck thinking about less-than-ideal solutions, missing the big green elephant staring right at you. I had a nice win in a short time that I didn't want to leave unshared.

I had used a bookkeeping app for several years to file my taxes. Once I decided to switch accountants, I found that this application isn't very interested in you leaving, so it doesn't offer an "export all" functionality (a black-hat churn buster right there for you SaaS builders!).

Here are a couple of these screens:

[8/4] How a Scotsman saved hours of my time by turning an LLM into my virtual assistant
[8/4] How a Scotsman saved hours of my time by turning an LLM into my virtual assistant

Basically, I stood before a gigantic mechanical task: clicking through dozens of pages and downloading hundreds of invoices I had uploaded manually for the past years.

My first thought was that there is no way I'll want to make time for that. But if I don't do it, who will? A virtual assistant, of course! Just recently, I spoke with an entrepreneur buddy of mine who works in the virtual assistant business. He spoke very highly of virtual assistants from the Philippines and recommended a Facebook group for finding trusted people.

So the solution was obvious: find a virtual assistant! I still hesitated, though, because there was another daunting task: finding the right one, someone you could trust enough to pay for the job and to share your accounting data and passwords with.

Luckily, a Scottish man came to Barcelona for a day of co-working and enlightened me:

"Why don't you use a browser script or ChatGPT Atlas?" - A Scotsman. 🏴󠁧󠁢󠁳󠁣󠁴󠁿

Then it dawned on me. What kind of an engineer was I? My recent conversations had me stuck thinking like a businessman, and a very lousy one too, pre-November 2022 at the very least.

ChatGPT Atlas had just launched, but it had two fatal flaws at the time: no image upload and no way to download an image.

We prioritized safety as we built ChatGPT’s agent capabilities in Atlas, and added safeguards to address new risks that can come from access to logged-in sites and browsing history while taking actions on your behalf, for example:
- It cannot run code in the browser, download files, or install extensions
- It cannot access other apps on your computer or file system

Basically, it couldn't do anything.

So I took Claude Desktop from its dusty shelf and started vibing. I was close to getting frustrated on the fifth prompt, but Claude convinced me to push on!

[8/4] How a Scotsman saved hours of my time by turning an LLM into my virtual assistant

I believed the robot, took a step back and asked it to make it work for one row first. After that, it was clear that success was imminent.

We also both got a bit too excited as the vibe continued in a positive direction:

[8/4] How a Scotsman saved hours of my time by turning an LLM into my virtual assistant

Here's the whole vibe trip. You can skim through my 12 short prompts on the right to see the story unfold:

Browser button click automation
Shared via Claude, an AI assistant from Anthropic

Turned out that the virtual assistant was Claude after all.

So a laborious task turned into a fun engineering challenge. I tracked the total prompting and thinking time: 25 minutes, which is nothing compared to hours of rote doom-clicking through a thousand links or interviewing people. Obviously, a virtual assistant is still great for other tasks, but in the end, a good assistant would have vibed out a script like this as well.
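For the curious, the shape of such a download script is nothing fancy. The real script Claude wrote ran in the browser, but here's a hedged shell sketch of the same idea: feed it a list of invoice URLs and let it fetch them one by one. The demo uses local file:// URLs so it runs anywhere; a real run would add your session cookie to each curl call:

```shell
set -e
demo=$(mktemp -d); cd "$demo"

# mock "invoices" standing in for the real, logged-in download URLs
printf 'invoice-1\n' > a.pdf
printf 'invoice-2\n' > b.pdf
printf 'file://%s/a.pdf\nfile://%s/b.pdf\n' "$PWD" "$PWD" > urls.txt

mkdir downloaded && cd downloaded
while read -r url; do
  curl -sS -O "$url"   # -O keeps the filename from the URL
  sleep 0.1            # be polite when it's a real server
done < ../urls.txt
ls                     # the two downloaded invoices
```

The only genuinely hard part is extracting the URL list in the first place, which is exactly what the LLM helped with.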

The lesson here is that there is nothing like solving a task by guiding the LLM with small, gradually increasing steps and having a Scotsman on your side! (thank you, bud 🧡)

💡
P.S.: In my engineering practice, I'm turning away from prompting, towards agentic workflows. I try not to prompt or micro-manage the agent too much for the initial task. Instead, I figure out a PLAN.md with success criteria and let the agent do its thing until it's done. I recently started using the Playwright MCP server after a recommendation from an American man for browser-based success criteria. In hindsight, I would have used that with Claude Code instead and let it figure things out itself. But this is a topic for the next post.
]]>
<![CDATA[[7/4] Speak to LLMs with voice-to-text]]>This small practice made me more productive and happy as a software engineer. If you chat with LLMs && you are in a room where you can speak to your computer && you are not speaking to your computer, you might be missing out.

Using a voice-to-text tool

]]>
https://richstone.io/7-4-speak-to-llms-with-voice-to-text/69492d020b7bf50001d989fdMon, 22 Dec 2025 11:36:17 GMT

This small practice made me more productive and happy as a software engineer. If you chat with LLMs && you are in a room where you can speak to your computer && you are not speaking to your computer, you might be missing out.

Using a voice-to-text tool is excellent for software engineering and knowledge work for several reasons. Let's check those reasons with examples and look at the tools I used to give myself some engineering boosts.

Brain dump LLMs

War story: Once, we forgot to record or transcribe a meeting where we discussed a bunch of action points and a summary we needed to forward to the CEO. The CTO brain-dumped everything he remembered from the convo to ChatGPT and asked it to provide a summary and the action points for each team member. When I reviewed the summary and action points, they were exactly to the point. That might have turned out differently. When you speak, you brain-dump it freely; when you have to write yet another thing that looks shareable, this is a whole other level of effort.

War story 2: Once I had a homework assignment (one I created myself for the Rails Builders group) to define an ICP (Ideal Customer Profile). I did 80-90% of that homework while walking somewhere, answering all the questions. Out came two nice ICP versions in markdown format for Funnels on Rails, which I then just needed to import into Google Docs and tweak (off topic: an ICP helps immensely when shaping your product and conceiving your marketing/sales messaging). Without voice-to-text, that homework might never have been done.

Generally, I use chat conversations, technical notes, and voice-to-text brain dumps on what I know about an issue to create a PLAN.md to resolve it, or as a prompt for Claude Code.

The cool thing is that if you have a high-quality voice-to-text generator, you rarely need to verify the results you pass to LLMs. LLMs are good at inferring the actual thing you wanted to say if there is a typo or if voice-to-text misinterpreted what you spoke.

Repetitive and quick inputs

There are some things I often tell the LLM that it likes to forget. If I tell Claude Code to just "implement the code for the PLAN.md" it will frequently forget these bits from the CLAUDE.md:

## Workflows
[...]
### Git

- Create atomic commits for the different steps that you work off. Atomic commits don't mean small commits. They can be bigger commits with everything that belongs together to achieve the goal of the commit.
[...]

### Testing

- Run relevant tests after each finished work item.

If, on the other hand, I tell it to "implement the code for the PLAN.md, make sure to run the tests and commit after each step, otherwise your work will not be accepted", then there is a 95% probability that Claude Code will do the right thing.

It'd be frustrating to type this in every time.

Need for speed

It's also often just much faster than typing. When did you last test your typing speed?

I usually don't perform very well on these tests, though I've been touch typing for 15 years. If you never practice speed, you'll never get very speedy. My max is about 60-70 WPM. Here is a spontaneous WPM test I just did for you; the result is abysmal:

[7/4] Speak to LLMs with voice-to-text
typing.com 1 minute test

I'm also a relatively slow talker and take my time thinking with voice-to-text, but my voice-to-text app still shows me 80-95 WPM averages.

Managing energy levels

My brain sometimes reaches a point where it's easier to say something than to write it. For example, after a full day of good work and shipping, it can be daunting to sit down for another hour at your computer and start writing that prompt for your side project.

Voice-to-text is a great way to manage these breaking points and turn them into an opportunity to express your thoughts and ideas differently. Doesn't seem like a big thing, but if you want to max out on your output some days, it is.

Texting someone on the go

Life gets busy, and you might need to communicate while walking. Some conversations are easily dealt with by a voice message. But others aren't good candidates, because the person on the other end prefers text messages or needs to receive a text message for other reasons. Having great voice-to-text on your mobile is really nice here. I have an iPhone, whose built-in voice-to-text is OKish, but it doesn't nail it.

Tooling

This is not a comprehensive guide, just a personal recommendation based on personal experience.

WisprFlow

My preference is WisprFlow. I press CTRL+SHIFT to record something quickly and release; it then automatically pastes wherever the cursor's focus is. CTRL+SHIFT+SPACE is for longer rants.

[7/4] Speak to LLMs with voice-to-text
WisprFlow is telling me it's recording at the bottom of the screen.

The other cool thing is that it has its own clipboard manager, not polluting your regular clipboard. You can always paste your last input with CTRL+CMD+V or go back to the app to find something you said.

[7/4] Speak to LLMs with voice-to-text
You can see I didn't work much "YESTERDAY", and I already spoke a little book into WisprFlow: 44k words in not even two months.

The mobile app has its quirks, but works better for me than the built-in iOS voice-to-text of my keyboards.

They have a free tier so you can see if this whole voice-to-text thing is something for you, and you can get a month for free (and I think I do too) with my link here.

Superwhisper

I know a lot of people use Superwhisper, which has a free open-source version. I tried the free tier of the paid version but never became friends with the desktop shortcuts, and I don't think it has the feature set I described above for WisprFlow. It also has a mobile app, which at the time didn't work on my iPhone at all.

Claude Desktop

One of the Claude Desktop updates brought an intrusive shortcut for speaking to Claude directly using the Caps Lock key. I wondered if I could misuse it as a free voice-to-text app, but the quality of the generated text was unusably low. Here is an example, generated for you right now:

[7/4] Speak to LLMs with voice-to-text
Claude Desktop and WisprFlow competing in a voice-to-text test.

This is the exact text I just spoke, generated by WisprFlow:

[7/4] Speak to LLMs with voice-to-text

This is what Claude Desktop generated:

[7/4] Speak to LLMs with voice-to-text
Claude must be thinking I'm speaking a different language.

Google Docs

A thing I recently learned from Aaron Francis in his screencasting course is that he uses Google Docs voice-to-text to do a first run of his screencasts before actually recording anything. So, this might cover your use case, too, keeping things extremely simple.


If you aren't using voice-to-text for coding and other knowledge work yet, this is the time to try it. The tools are there and it's giving coding another fun spin, further turning you into a ship-machine.

[7/4] Speak to LLMs with voice-to-text
]]>
<![CDATA[[6/4] git worktrees with parallel agents in practice]]>This is not a comprehensive guide to Git worktrees, but I wanted to share how I'm currently using them to help you work more effectively. I also want to have a snapshot to compare against in a year, when multi-agent workflows will become more critical.

If you want

]]>
https://richstone.io/6-4-git-worktrees-with-parallel-agents-in-practice/691a494e0b7bf50001d9829dThu, 11 Dec 2025 08:57:31 GMT

This is not a comprehensive guide to Git worktrees, but I wanted to share how I'm currently using them to help you work more effectively. I also want to have a snapshot to compare against in a year, when multi-agent workflows will become more critical.

If you want to know how worktrees work fundamentally, check out [4/4] Code with LLMs in parallel. It starts with the basics of how worktrees work at the Git level and walks through the tools I don't use and why. In this post, you will see what I use almost daily and how.

"State of the Art"

I'll be a bit of a downer here because I think there is currently a mismatch between the potential parallel-agent capabilities of AI, the skill development of the regular engineer (me) and how AI agents work right now. Let me illustrate:

Expectation of the regular, ambitious engineer: I will run as many agents as possible to build and improve the product as never before.

Reality 1: There is probably a skill issue. Most tasks, even well-planned ones with big PLAN.mds and tests as the success criteria, don't take long enough for parallel agents to be valuable. Context switching every five minutes will kill your brain.

Reality 2: It's probably also an agent issue. Claude Code and co. are likely designed to get things done quickly and exit early when in doubt. At least this is my experience for even big creative tasks. I had Claude Code running for 20-30 minutes at a time for very repetitive work with a recipe, à la "Repeat this process for these 30 files".

So, where do I still find value in git worktrees for running agents in parallel?

I use worktrees sometimes daily, sometimes weekly now, depending on what I'm working on.

Mostly, it's helpful to me to have 1-2 simultaneous planning tasks in a separate worktree so my current branch stays unblocked. A rarer use case is to run them on longer-running development tasks, where I know they will take longer and I have enough guardrails not to need to babysit them. 

The available tools, such as Conductor, still don't fully solve it for me. As I mentioned in [4/4] Code with LLMs in parallel, the dangerous permissions issue and other UX issues remain deal-breakers. Instead, I've cobbled two things together to make it work for me.

1. Launching multiple agents with a script

I created a Ruby script that launches prompts in one or more worktrees, with or without Ultrathink. This is useful for A/B versions of more complex features where I want to evaluate different approaches. I also use it to kick off a planning task and explore multiple possible solutions.

With the `-ab` flag, the script will start a prompt in two worktrees, one of them ultrathinking. Ask both to give you 3-5 solutions to a complex problem, so you have even more options to look at and narrow down.

It's currently --dangerously-skip-permissions mode only, so make sure you execute it in a safe environment.

So, you'd run it in your repo like:

funnels-on-rails on main ⌚ 9:13:29
$ ./bin/parallel-agents/worktree_run_ghostty -ab --setup-type full "write a Chello world in brainfuck; commit your changes if you have been ultrathinking, otherwise do not commit"

It will create two separate trees, each running in a new terminal instance.

[6/4] git worktrees with parallel agents in practice

It's also currently copying over ignored Git files manually, so you may theoretically end up with a functional environment on your worktree. I say "theoretically" because it does not work for all apps I tested.

It's open source on GitHub if you want to look at some vibe-coded stuff here.
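For reference, the "copying over ignored Git files" part is conceptually just a loop like the one below. This is a self-contained demo with a mock project; the file list (.env here) is an example you'd extend with whatever your app actually needs:

```shell
set -e
demo=$(mktemp -d); cd "$demo"
mkdir -p project project-parallel-1
printf 'SECRET=123\n' > project/.env   # stand-in for a real ignored file

cd project
# copy each ignored-but-needed file into the fresh worktree
for f in .env; do                      # extend with whatever your app needs
  [ -f "$f" ] && cp "$f" "../project-parallel-1/$f"
done
```

The tricky part in practice is knowing which ignored files a given app needs to boot, which is why this only "theoretically" yields a functional environment.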

2. Managing worktrees created as part of the script run

I found a JetBrains plugin that lets me manage worktrees directly in the IDE, which I find very useful for day-to-day operations.

The cool thing here is the Open Worktree feature, which opens the worktree in a new IDE instance and lets you check the code, run tests and, if it wasn't just a planning task, review your changes with the app running.

[6/4] git worktrees with parallel agents in practice

Cleanup

At some point, you need to clean up your trees. 🪓

The Delete Worktree button above doesn't let you batch-delete them and also doesn't remove the respective branches, so I usually end up screenshotting the list of worktrees above and making it a task for Claude Code:
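If you'd rather not spend a Claude Code task on it, the batch cleanup is a few lines of shell. A self-contained sketch (the throwaway repo and the project-parallel-* naming are examples; point the loop at your own worktree paths):

```shell
set -e
demo=$(mktemp -d); cd "$demo"
git init -q project && cd project
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git worktree add -q ../project-parallel-1
git worktree add -q ../project-parallel-2

# remove each worktree AND its branch, which the IDE button skips
for path in ../project-parallel-*; do
  git worktree remove --force "$path"
  git branch -q -D "$(basename "$path")"
done
git worktree prune
git worktree list   # only the main tree remains
```

This relies on the worktree directory and its branch sharing a name, which is what `git worktree add <path>` gives you by default.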

[6/4] git worktrees with parallel agents in practice

That's all I have for now regarding the trees. I'm really looking forward to where this is going, though. My hunch is that we will grow into more skilled engineers who create better plans with stronger success criteria, resulting in longer-running one-off parallel agent solutions that we will manage conveniently through some dope UI tools.

]]>
<![CDATA[[5/4] Code with LLMs and strong Success Criteria]]>A buddy of mine from some far cold coasts recently visited me in my hometown. He mentioned that he was using some bits from my [1/4] article on LLMs about coding with a PLAN.md. Which is fantastic, because that's what I'm writing this stuff

]]>
https://richstone.io/5-4-code-with-llms-and-strong-success-criteria/68fe69e90b7bf50001d97fc8Sun, 02 Nov 2025 19:13:40 GMT

A buddy of mine from some far cold coasts recently visited me in my hometown. He mentioned that he was using some bits from my [1/4] article on LLMs about coding with a PLAN.md. Which is fantastic, because that's what I'm writing this stuff for!

BUT. He also mentioned that he doesn't let Claude Code --very-dangerously-execute-tests, which is a pity because I find that this is where the whole Claude Code juice hides. It gives the LLM a chance to find its own bugs, which it will inevitably introduce. You know, those nifty LLM bugs that are extremely hard to notice and debug.

So I wanted to make this point again in its own post:

You gotta give the LLM an actionable Success Criteria that will help it self-recover, self-debug and self-fix.

Otherwise, with just a PLAN.md, you are still in the ancient Cursor and Copilot lands of six months ago, where you let it generate code, maybe just a bit more reliably.

I have a special Success Criteria section in my PLAN.md:

# The Good Feature

[...] 

## Success Criteria

Create and run this new test until successful:

- test/controllers/goods_controller_test.rb

Adjust this related test until successful:

- test/controllers/foosels_controller_test.rb

Without it, your work won't be accepted.

The important part of an "actionable" success criterion is that the LLM gets some kind of feedback on whether it's successful or not. You are enough of a bottleneck already; you don't want to also be its feedback bottleneck. Here are a few more examples:

  • Implement according to the plan until all the shell script's branches run without an error.
  • Fix the error and make sure the test in test/models/barber.rb runs successfully. The file is quite large, so it's fine if you just run the line of the newly created regression test.
  • [... API integration...] Create temporary scripts to make the API calls. Make sure to hit the API with actual requests via cURL and that the different API call sequences are successful.

I also frequently casually end my "Write the code to implement PLAN.md" prompts with:

"Make sure to run the mentioned tests successfully, without it your work cannot be accepted!!!?!??(#@$@)(#$&() aaahhh 🙀"

I guess (hope) it kinda helps. I do still notice that if the PLAN.md is quite large in scope, Claude gets kinda tired and just implements without giving a big shit about creating or running tests. I wonder where this tendency comes from? Definitely not us developers. Still, not a problem, since you've written the test scope down in the PLAN.md, and you can just ask it to finish the implementation by creating, adjusting and running the relevant tests.

So let's get our criteria together and make that Claude Code more powerful than those pre-agentic Cursor workflows.

]]>
<![CDATA[[4/4] Code with LLMs in parallel]]>I have the strong feeling that this is one of the core skills to develop in the next couple of years as an engineer. The tooling is not great yet, but by the time you read this, it might be, so take this more as a primer and look under

]]>
https://richstone.io/4-4-code-with-llms-in-parallel/68a1c1592a0cad00018936e3Sun, 17 Aug 2025 14:31:26 GMT

I have the strong feeling that this is one of the core skills to develop in the next couple of years as an engineer. The tooling is not great yet, but by the time you read this, it might be, so take this more as a primer and look under the hood of how future tools will and current tools already work to run agents in parallel.

I tried four methods of running agents in parallel, and I will explain why I currently prefer custom scripts or manual management over existing GUI/TUI tools.

If you haven't delved into planning and swarms extensively, you may not have encountered the scenario where you need parallel workers. But once your agents start to take on bigger tasks, you might start thinking about it. I encourage you to go back and especially look into the PLAN.md -> Implement -> Iterate technique:

[1/4] Code with LLMs and a PLAN
I’m fortunate enough to work every day with the state-of-the-art coding agents, most recently Claude Code, and I’ve also explored various resources from developers who push the boundaries and produce significant work with top-tier agents. We all need to level up together in this wild west of AI coding and
[4/4] Code with LLMs in parallel
[3/4] Code with LLM teams
Your LLM agent will launch subagents to perform the tasks you give it in teams. Remember in [1/4] Code with LLMs and a PLAN I talked about how I tweaked an effectively working prompt to skip the step of asking the LLM to first add what it will do
[4/4] Code with LLMs in parallel

Once you have agents perform tasks that are longer than 10 minutes, you might be eligible to start accustoming your brain to the future of building.

So how does this work in its rawest form?

By now you might have heard of the git worktree command. It allows you to have multiple branches of the same repo checked out simultaneously. Here's how you would do it manually:

# from your project root
project> git worktree add ../project-parallel-1
project> cd ../project-parallel-1
project-parallel-1> claude "destroy computer" --dangerously-skip-permissions

Once in the new directory, you can start your LLM agent and give it some longer-running tasks that won't conflict with whatever is going on in the main branch.

You will see project-parallel-1 as a new branch in your git viewer:

> git branch
  another-branch
+ project-parallel-1
+ project-parallel-2
+ project-parallel-3
  master
  more-branches

All the branches marked with a + are separate worktree branches. You can remove them by running git worktree remove project-parallel-1.
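Scaled up, the manual flow above is just a loop. A self-contained sketch in a throwaway repo (the branch names are examples, and the commented-out invocation assumes the Claude Code CLI):

```shell
set -e
demo=$(mktemp -d); cd "$demo"
git init -q project && cd project
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# three parallel worktrees, one per agent
for i in 1 2 3; do
  git worktree add -q "../project-parallel-$i"
  # in each one you would now launch an agent, e.g.:
  # (cd "../project-parallel-$i" && claude "some task" --dangerously-skip-permissions)
done
git branch --list 'project-parallel-*'
```

Each worktree gets its own branch off the current HEAD, so the agents can't step on each other's uncommitted changes.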

That's it. There are several gotchas here.

Depending on your local setup's complexity (think half-local, half-Docker/Kubernetes setups), it's not a given that all your tests (unit, integration, system) will run in that worktree without extra commands, or that you'll be able to spin up a second instance of your app for manual testing. So depending on what's going on in your main tree, you might, again, need to wait until you can merge your changes in to test them. This makes quick iteration difficult.

Managing worktrees manually is cognitively taxing, so it needs to be really worth the effort and you kinda have to know that the agents will be busy for a while. I'm still not at that point, most of the tasks I give it are 10-30 minutes, including running slow tests as the verification mechanism.

I've usually used parallel agents to start new adjacent tasks or small bug fixes from a different topic or create a PLAN.md for the next task. I don't see value yet in tackling multiple projects alongside each other, for the same reason: it's too costly to context switch between two big tasks with the agents performing "small" 10-30 minute tasks.

I've never tried launching Claude Code for the same PLAN.md with multiple agents for tasks where you might be interested in different results (UI, code architecture approaches), but I find this very inspiring: The Future Of Building With Code. I think this is just a tooling problem and we are probably already close to having it solved.

So to solve the tooling problem and free up some mental space, there are some tools out there already. Here's why they don't work for me yet.

GUI Tool: Conductor

It's great to have a GUI for this whole thing. Conductor has some configuration options, but I might not have had enough patience with it, and it might be orders of magnitude better since I last used it. Looking at it now, though, it still won't work for me because I can't --dangerously-skip-permissions with it.

:sadpanda:

You can create a worktree with a button click, which is way cooler than the CLI worktree hassle:


And you'll find yourself in a Claude chat wrapper and a diff viewer.

By default it puts your worktree into the same repo, so files show up twice when you search for them in your main tree. I think this is configurable, and you should take care of that first.

What's really bad is that it always branches you out from your main/master branch. I might have missed a config option to branch from the current branch, but that was not fun.

If you need to run bundle on each worktree creation, you can configure this in Conductor, which is a nice feature.


The diff viewer isn't useful for me, and I don't feel comfortable with the Claude Code wrapper. It looks like a wrapper around the API, so commands like /mcp are not available. I need the full Claude Code powerhouse, so it's a clear "no no no".

TUI Tool: Claude Squad


I think Claude Squad is a bit buggy; see the error at the bottom. :D

What it does better is branching out from your current branch, putting the worktrees somewhere outside your repo, and having a stable naming convention for your branches. (Conductor has some weird magic there: it automatically renames branches, which messes up the git branch view for me. I end up with branch soup.)

Another nice thing here is that creating new worktrees is fairly intuitive and you stay in the original Claude Code TUI after you've switched to a new tree.

It's almost the ready-to-go solution, except you can't --dangerously-skip-permissions.

:sadpanda: 2

I also haven't found much to configure inside, so there's no option to "always do xyz when creating a new worktree".

So, a no-go as well.

Custom Scripts

This video might have the solution to all of it:

But vibing seemed easier than looking for that repo and trying to adapt it to my setup, so I began to work on a Ruby script that fits my needs:

[6/4] git worktrees with parallel agents in practice
This is not a comprehensive guide to Git worktrees, but I wanted to share how I’m currently using them to help you work more effectively. I also want to have a snapshot to compare against in a year, when multi-agent workflows will become more critical. If you want to know
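For flavor, here's roughly the shape such a script takes, sketched in shell (the function name, the .env copy, and the SETUP_CMD override are my inventions, not the actual script; setup defaults to bundle install):

```shell
# Hypothetical helper: branch from the CURRENT branch, keep worktrees
# outside the repo, and run per-worktree setup in the new tree.
new_worktree() {
  name="$1"
  [ -n "$name" ] || { echo "usage: new_worktree <branch-name>" >&2; return 1; }
  root="$(git rev-parse --show-toplevel)"
  dest="$root/../$(basename "$root")-worktrees/$name"
  mkdir -p "$(dirname "$dest")"                  # ensure the parent dir exists
  git worktree add "$dest" -b "$name"            # branches from HEAD, not main/master
  cp "$root/.env" "$dest/" 2>/dev/null || true   # carry over untracked config if present
  ( cd "$dest" && ${SETUP_CMD:-bundle install} ) # project-specific setup
  echo "worktree ready: $dest"
}
```

The point is encoding the three fixes from above in one command: branch from the current branch, stable branch naming, and worktrees living outside the repo.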

In a nutshell, managing worktrees yourself is challenging and the tools aren't there yet, but I'm sure we are close, and this will be one of the fun driving factors in the coming months.


🤖
Fun fact: Did you know that the word robot comes from a Czech author's dystopian play about artificial human workers who were grown to work? He called them "roboti", after the Czech "robota", forced labor.
Robots, Before They Were Robots
Before Hollywood, before Boston Dynamics, robots were about labor, not lasers.
]]>
<![CDATA[[3/4] Code with LLM teams]]>Your LLM agent will launch subagents to perform the tasks you give it in teams.

Remember in [1/4] Code with LLMs and a PLAN I talked about how I tweaked an effectively working prompt to skip the step of asking the LLM to first add what it will do

]]>
https://richstone.io/3-4-code-with-llm-teams/6888e7c72a0cad0001892872Sun, 03 Aug 2025 22:13:54 GMT

Your LLM agent will launch subagents to perform the tasks you give it in teams.

Remember how in [1/4] Code with LLMs and a PLAN I talked about tweaking an effectively working prompt to skip the step of asking the LLM to first add what it will do to the existing PLAN.md? And how in [2/4] Code with LLMs and default instructions CLAUDE.md helps your agent get the right setup for each conversation? Subagents execute the tasks you assign them in the same well-prepared manner, equipped with the proper context for each step.

The Claude On Rails gem automates the setup of these subagents by creating an .md file for each subagent specialist tailored to Rails. For example, if views need to be made during the implementation step, a views.md file will be used to feed the subagent that will work on it. Similarly, for tests, there will be a test.md and so on.


I'm still experimenting with this in the Funnels on Rails app, so I don't have grand stories from the real world to share about how much this improves the quality of the output. What I have today is a fair warning that the "architect" from above is not a replacement for a phenomenal PLAN.md. As a developer, you are still the "product manager" who will need to specify great, fine-tuned PLAN.mds together with Claude, so that all the different subagents in the implementation step have a foundation to work from. Basically, I don't yet think the example in the Claude on Rails gem should be taken at face value in any way:


In my experience, the architect agent, the "main agent" who coordinates the subagents, should have as clear and detailed instructions as you would have for your regular LLM conversation.

Rails builders 2 🛠️

Next week is the last chance to secure an open spot in Rails Builders 2 (free), which runs until the end of August. After that, we'll finalize the groups in the existing formations. If you know me, just connect wherever we usually chat. If we haven't spoken yet, press the button. 👇

Get to know call ☎️

]]>
<![CDATA[[2/4] Code with LLMs and default instructions]]>This will be a quick one, and if you get this one right, your coding buddies will run more smoothly on average because they will get stuck less and have clearer directions for each conversation session.

I guess for the Cursor and Co. people, it will be something like the

]]>
https://richstone.io/2-4-code-with-llms-and-default-instructions/688e63ab2a0cad0001892db5Sat, 02 Aug 2025 20:57:14 GMT

This will be a quick one, and if you get this one right, your coding buddies will run more smoothly on average because they will get stuck less and have clearer directions for each conversation session.

I guess for the Cursor and Co. people, it will be something like the cursor/rules file. For state-of-the-art agents like Codex, Claude Code, and Amp, you'll have something like a CLAUDE.md file. This piece of information is typically ingested as context at the start of each conversation.

This is where you give the LLM a basic overview of your codebase and project. When you set up Claude Code for a project, there is an /init option that makes Claude analyze your project and fill the CLAUDE.md with some machine-useful context. You have to edit and fine-tune it with your own instructions about code architecture, style, and workflows. You can imagine "workflows" as instructions on how to deliver the implementation.

Some doubt the effectiveness of the file, reporting that the LLMs ignore instructions. I personally don't expect an LLM to deliver exact solutions and follow all the rules; that's just not how it works right now. I'm just happy that it delivers better on average, which I can see because it stops getting stuck once I formalize a few things.

The Funnels on Rails app is relatively new and has a fairly small instructions file. Obviously, I can't tell how much worse the LLM would perform without the file. What I definitely know is that it used Bullet Train's super scaffolding for new features a couple of times, which I'm pretty sure it wouldn't have done without that context:

[...]
**Super Scaffolding**:

Bullet Train's code generation engine that creates complete CRUD interfaces. Key commands:

```bash
# Generate new model with CRUD interface
rails generate super_scaffold ModelName Team field:field_type

# Add field to existing model
rails generate super_scaffold:field ModelName field:field_type

# Available field types: text_field, trix_editor, buttons, super_select, image and more
```

Use the docs to get detailed info about how to scaffold:
https://bullettrain.co/docs/super-scaffolding

Example:
[...]

Teaching CLAUDE.md about Bullet Train.

I've tweaked those instructions a bit, but it ultimately comes down to providing the LLM with the necessary context at the right time. It's no good to give it redundant context; in this case, the LLM doesn't really need to know about super scaffolding on every startup. So that particular CLAUDE.md will need many future iterations. What I think actually needs to happen here is that the Bullet Train docs get accessed by the LLM at the right time for the right info via some BT docs MCP server in the future, like the Rails MCP Server.

What I really enjoy watching Claude Code work with are the "workflow" type of instructions. Because Claude Code is integrated within your terminal, it executes system commands seamlessly. Once it's done coding, it will usually run tests. But not all tests are run equally:

## Testing

- You MUST ALWAYS run tests using `npm test -- --run` or `npx vitest --run` to avoid watch mode!
- Never use `npm test` without the `--run` to avoid watch mode.
- Always run `npx tsc --noEmit` and make sure there are no type warnings or errors.

You can see, I put special emphasis on some instructions out of despair. 😅

For a large codebase, you probably want to instruct it to just run the individual files or test cases as a default and describe your additional setup:

## Development Workflow

### Testing

- When switching to a new branch, run `make dev-sync` and a test to seed the test db initially: `bin/test --notify-sound --no-retry --no-precompile test/controllers/api/v2/contacts_controller_test.rb:124` (after this you need to run tests in the same format with the `--skip-seeds` option)
- To run tests always use this format: `bin/test --notify-sound --no-retry --no-precompile test/controllers/api/v2/contacts_controller_test.rb:124 --skip-seeds`. If you run into a NoSeedsError, it means the test DB isn't seeded as expected and you need to rerun tests without the --skip-seeds option.
- For big test files, prefer specifying the line when running tests.

### Formatting

- After all implementation is done, you MUST ALWAYS run standardrb over all previously modified Ruby files to ensure correct code formatting with a command like this: `bundle exec standardrb --fix app/services/pages_converter.rb`

That's all I have on your LLM setup instruction files so far. Just add directions to your instruction files whenever you see the LLMs repeatedly getting stuck, and have fun being more productive with "AI coding".


Rails builders 2 🛠️

Next week is the last chance to secure an open spot in Rails Builders 2 (free), which runs until the end of August. After that, we'll finalize the groups in the existing formations. If you know me, just give me a ping wherever we are connected. If we haven't spoken yet, press the button. 👇

Get to know call ☎️
]]>
<![CDATA[[1/4] Code with LLMs and a PLAN]]>I'm fortunate enough to work every day with the state-of-the-art coding agents, most recently Claude Code, and I've also explored various resources from developers who push the boundaries and produce significant work with top-tier agents. We all need to level up together in this wild west

]]>
https://richstone.io/1-4-code-with-llms-and-a-plan/688bd88a2a0cad0001892887Thu, 31 Jul 2025 23:25:42 GMT

I'm fortunate enough to work every day with the state-of-the-art coding agents, most recently Claude Code, and I've also explored various resources from developers who push the boundaries and produce significant work with top-tier agents. We all need to level up together in this wild west of AI coding and share the good, the bad, and the ugly along the way. So, I'll be flooding everyone's inboxes with four articles on my most recent experiences and learnings in the space over the next few days.

The very first thing to get right when coding with LLM agents is your process. It's actual teamwork. At the highest level, it should look like this:

🤖
1. Make the LLM agent create a PLAN.md based on your detailed requirements. Ensure that you provide a clear definition of 'done' and outline how the agent can verify the implementation independently without requiring your intervention (e.g., automated tests, scripts).

2. Fix the PLAN.md, remove redundancies, add missing pieces.

3. EDIT: (optional) Iterate with the LLM on the different steps in the PLAN.md (see step 6. in Nate's X post).

4. /clear context and commit your PLAN.md (the LLMs are trained on a reward function that sometimes leads them to fix the requirements instead of the implementation, as a rogue way to make it work).

5. Ask the LLM agent to "implement the PLAN.md". If you are restarting development on existing files, you can load them into the context (like this). I usually repeat the definition of done and verification steps from step 1. as part of the prompt here.

6. Review - Refine. Maybe throw away and rewrite the PLAN with the LLM, repeat the other steps.
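Step 4 is worth making concrete: committing the plan also gives you a cheap tamper check. A minimal sketch in a throwaway repo (the PLAN.md content is just a stand-in for what the agent would write):

```shell
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com && git config user.name you

# Stand-in for the agent-written plan
printf '# PLAN\n- [ ] add endpoint\n- [ ] add tests\n' > PLAN.md

# Commit before the implementation run, so the requirements are pinned
git add PLAN.md && git commit -q -m "Add PLAN.md"

# After implementation: an empty diff means the agent didn't quietly
# rewrite the requirements to match a broken implementation
git diff --stat HEAD -- PLAN.md
```

If that diff is ever non-empty after an implementation run, review the plan changes before trusting green tests.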

How do I know? Because when I first started working with agents, using Junie in RubyMine, and now Claude Code, I just provided them with tasks that included as much detail as I thought they would need. Once I started following the process after learning about it in the "Babysitting Coding Agents" Changelog episode, I felt the quality of the output increase massively, and the needed refinement of the prompts decreased.

One reason it works is probably that the PLAN.md is structured in a way that makes it easier for the LLM to implement. The other is that you refine the PLAN.md. In that process, you often improve the original requirements and become more specific about the implementation.

I have some thoughts on PLAN.md and coding with LLM agent squads/swarms, but that's for a future post.

Another interesting aspect is that several people in our Rails Builders groups reported using Gemini for planning because it can likely ingest your entire codebase or most of it, thanks to its enormous token limit. Others use Claude Desktop for planning, I guess, to save tokens if they don't already run on something like Claude Code Max (currently $100/mo).

Let's look at one example of how it worked out for me most recently.

Real-world example with a twist

We have a massively fun project at ClickFunnels: transforming the best parts of our public API into an MCP server that LLMs can interact with. I can hardly fathom how much time this approach saved me. And I'm sure what I did would have taken someone really strong in LLM coding even fewer steps and less time to figure out, but here is how I went about it step by step.

We have about 150 endpoints in the public API and a very large OpenAPI spec that Claude Code can't even ingest at once; it has to search inside it and pull out the relevant parts piece by piece. We wanted to expand our initial spike of 7 MCP tools to approximately 100.
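To give a feel for the piece-by-piece slicing, here's the kind of query that pulls one resource's actions out of a spec, using jq (the two-method mini spec below is an invented stand-in, not our actual cf-v2-openapi.json):

```shell
cd "$(mktemp -d)"

# Invented stand-in for a huge OpenAPI spec
cat > spec.json <<'JSON'
{"paths": {"/workspaces/{workspace_id}/orders/tags": {
  "post": {"summary": "Create order tag"},
  "get":  {"summary": "List order tags"}}}}
JSON

# One line per endpoint action: METHOD PATH  SUMMARY
jq -r '.paths | to_entries[] | .key as $path | .value | to_entries[]
       | "\(.key | ascii_upcase) \($path)  \(.value.summary)"' spec.json
# → POST /workspaces/{workspace_id}/orders/tags  Create order tag
#   GET /workspaces/{workspace_id}/orders/tags  List order tags
```

The same idea scales to extracting parameters, request bodies, and response schemas per resource, which is exactly what the PLAN entries below are built from.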

I gave it an early version of the prompt below:

get all the possible actions, response attributes, request
parameters, filters, validation constraints and descriptions from the
@cf-v2-openapi.json. Make sure the tool descriptions are populated with the
right info for the LLMs to work with. Just write a PLAN.md don't code anything yet.

I let it implement the first version of the MCP server tools from the result of that prompt and noticed that the prompt needed to be more detailed. So, I discarded that implementation and refined the prompt. The second version created a PLAN that looked something like this:

#### CRUD Orders::Tag
- [ ] `create_order_tag` - Create order tag
  - **Endpoint**: `POST /workspaces/{workspace_id}/orders/tags`
  - **Description**: Create a new order tag in the workspace for organizing and categorizing orders
  - **Parameters**: workspace_id (path)
  - **Request Body**:
    - name (string, required) - Name of the order tag
    - color (string, optional) - Tag color (hex code like "#FF5733")
  - **Response**: 201 Created with OrdersTag containing ALL fields:
    - id (integer) - Tag ID
    - public_id (string) - Tag public ID
    - workspace_id (integer) - Workspace ID
    - name (string) - Tag name
    - color (string) - Tag color
    - created_at (string) - Created at datetime
    - updated_at (string) - Updated at datetime

I first implemented that PLAN with a few API resources that we have at ClickFunnels, reviewing and testing the output.

And here's the twist: with the next resources I implemented, after more refinements to my PLAN.md, the tools were working pretty well, and at some point I stopped watching the process. I let the LLM extend the PLAN.md based on the existing PLAN.md and the existing tested code, guided by the prompt I provided. Here's the final prompt that found its home in our CLAUDE.md for that project:

[...]

## Adding new tools

Our OPENAPI_PLAN.md currently serves as a source for extending the server with new tools. A good way to do this is to use a prompt like this:

/clear context first

```prompt
We also need tools for [Namespace]::[Resource] and [OTHER] resources. Fix the
PLAN.md to have these resources and plan similarly to how you planned the other
resources, like Sales::Opportunities (OR OTHER RESOURCE EXAMPLE SIMILAR TO THE
NEW RESOURCE). get all the possible actions, response attributes, request
parameters, filters, validation constraints and descriptions from the
@cf-v2-openapi.json. Make sure the tool descriptions are populated with the
right info for the LLMs to work with. After you have planned and committed the
plan, implement the newly added plan for the mentioned resources. Test the new
implementation in the existing test files and run the tests with npm test --
--run and check for type issues with npx tsc --noEmit.
```

[...]

[Namespace]::[Resource] and [OTHER] are the only placeholders that I now change when adding a new set of tools through one or more API resources.

In a future post, I will discuss parallelizing the entire sorcery.

Rails builders 2 🛠️

We push each other to build better together in the Rails Builders groups and there's rarely a session where we don't talk about AI and LLMs so...

This week is the last chance to secure an open spot in Rails Builders 2 (free), which runs until the end of August. After that, we'll finalize the groups in the existing formations. If you know me, just hit reply. If we haven't spoken yet, press the button. 👇

Get to know call ☎️

Resources

Check out these guys who share in detail their process of how they build and extend real databases with LLMs on the bleeding edge of AI coding:

And just a funny back-reference if you want to travel time and see how quickly everything changes in this new era:

  • My wildly outdated first baby steps documented in March 2025 and updated several times already here.

Happy coding and learning with LLMs! 🤖

]]>
<![CDATA[Asking good questions for your product development]]>Have you ever wanted to start a start-up, only to realize that you already run one? Or two?

As part of the Rails Builders group, I aimed to validate a new product idea for Rails shops that already have or want to implement a public API. My first baby step

]]>
https://richstone.io/asking-good-questions-for-your-product-development/68865cb42a0cad000189253eSun, 27 Jul 2025 20:47:40 GMT

Have you ever wanted to start a start-up, only to realize that you already run one? Or two?

As part of the Rails Builders group, I aimed to validate a new product idea for Rails shops that already have or want to implement a public API. My first baby step was to reach out to 20 people whom I could learn from about the idea.

But I couldn't. Primarily, I struggled with the topic I would be discussing during my customer conversations. It was challenging to come up with good questions and establish a common thread for the upcoming conversation. The product was called unapi.dev, and its subheading contained all the directions it could go:

The UnAPI team's bold promises.

So, I spent enough time thinking about it to start analyzing the big picture of this whole thing. A reliable product in this space is a serious thing; if shit goes sour, either companies or their customers will lose money. Anything I can think of building in that space would cost an enormous amount of time and energy (and probably money). At the same time, I'm already fortunate to have a full-time contract, where I wear a variety of hats and lead the API team. Together, we accomplish all the tasks outlined in the subheading above and more. Basically, like a start-up. Not to mention the start-up that is my ever-growing family.

To make UnAPI work, I think I would need to drop everything and reduce the time I spend on my other start-ups, which I decided against.

This means I decided to invest more into the things I currently enjoy doing and growing: the Funnels on Rails developer marketing tutorial and the Rails Builders community.

I'm happily conducting the Mom Test for these products with fellow developers, indie hackers, and founders. Though I don't think I need the ~50 conversations in the short time frame I previously thought I needed for UnAPI. I've already learned a great deal about my people in the last few months, recruiting folks for the Rails Builders groups and hustling through those weekly sessions with the guys. Still, I will continue to invest in it slowly but surely. I'm fairly certain that continuing to engage with developers and industry experts (marketing professionals?) is crucial to making the Funnels on Rails tutorial an exceptional one, and the Rails Builders groups a platform where people challenge themselves to achieve better results.

My most recent outreach was a flop. I sent an email broadcast with what I thought was a cool, to-the-point message to 20 of the original Funnels on Rails opt-ins, offering an informal chat about their project. No responses so far:

It's a tiny sample size, and I guess I expected everyone would jump on that "offer" because they had already signed up for a developer-focused marketing tutorial thing. But things rarely go as you expect in marketing.

Still, I continue to have conversations with developer folks who may be interested in getting their product into the world organically, and I appreciate how I'm applying a few of the things I've learned during our ongoing asynchronous book club, where I'm re-reading The Mom Test with others. Let me share three of them with you.

First, the level of zoom in your customer conversation is important. Understanding that will spare you and your talking partner energy and time. You will have different levels of knowledge about your dialog partner. For example, I recently caught up with a buddy of mine who could potentially be interested in the Rails Builders accountability group or the Funnels on Rails tutorial. During our conversation, I learned that he is happy with his current professional endeavors and has no time for side projects due to family commitments. As the author of The Mom Test says, "Not a customer - move on" (very loosely cited). However, it was still fun to have that conversation. There was no reason to dig deeper into my buddy's problems that Funnels on Rails and Rails Builders solve, because there was no problem to solve right now. Each conversation has a decision tree, and some conversations are totally fine to end at one of the nodes early.

The second thing I had fun "seeing" after reading The Mom Test is how big I am on avoiding important and risky questions. To see that, though, you first need a handful of such questions for the different conversation partners. Here's a set of questions that I feel good enough about that I think they need answering during any of the convos:

  1. Why are you doing your side project? (How serious do they need to have this shipped? What is their main driver?)
  2. Do you need to hack with a community of fellow Builders? (Not all builders are ready to commit to building with other builders.)
  3. Do you want to learn more about marketing right now? (Or do they just want to have it built first?)
  4. Do they have a budget for any of the above?

Third, and no less important: moving to the next step. The conversation should conclude with a clear yes or no from your conversation partner. You need to ask for something of value, such as time, money, or reputation. An example would be asking them to pay for a Rails Builders group membership, which I successfully did in the past. However, when my offer was rejected, I never considered doing a "downsell", such as offering a free live tour through the Funnels on Rails tutorial or checking whether they know someone else who could benefit from the paid group format right now (based on their reputation).

I hope some of it will help you shape your own questions and conversations with leads and customers to make your product great.

Rails builders 2 🛠️

Next week is the last chance to secure an open spot in Rails Builders 2 (free), which runs until the end of August. After that, we'll finalize the groups in the existing formations. If you know me, just hit reply. If we haven't spoken yet, press the button. 👇

Get to know call ☎️
]]>
<![CDATA[The new AI wave, Rails Builders III and Mom Test reading group]]>Hey all!

Here are some mixed-bag updates with useful resources to help us reach the next level, as well as Rails Builders invites for building together.

Agentic Coding

I've been among the lucky ones in our development team to have gotten sponsored for a Claude Code Max plan

]]>
https://richstone.io/the-new-ai-wave-rails-builders-iii-and-mom-test-reading-group/685b8c33b02e100001c5bfe7Wed, 25 Jun 2025 12:13:53 GMT

Hey all!

Here are some mixed-bag updates with useful resources to help us reach the next level, as well as Rails Builders invites for building together.

Agentic Coding

I've been among the lucky ones on our development team to get sponsored for a Claude Code Max plan by ClickFunnels. Huge thanks at this point; otherwise, I would still be in the dark about where we are in the AI coding revolution.

For giant codebases, it's unlike anything I've tried for sketching first ideas around solving a particular problem or finishing features according to specific instructions.

For greenfield projects, it's pretty spot on when generating new code and iterating on it.

It's also unmatched in parsing a gigantic OpenAPI schema, extracting all the paths and matching them with HTTP methods and descriptions, and then building a dynamic Ruby client SDK gem for it. Everything in a few hours, not days.

If you are not using state-of-the-art coding agents (Claude Code, Codex or Amp) yet, and you are not sure what this is all about, listen to this:

Adventures in babysitting coding agents — Changelog & Friends — Overcast

If you are riding the new agentic coding wave with me already and using Claude Code, I can recommend these further diggings to bump your technological superpowers:

Rails Builders III ⚔️⚔️⚔️

I had the last calls for the new Rails SaaS Builders group last week and the good news is that we had about 5-6 people generally interested in the group!

As a reminder, the plan was to help us build your app customer-first, following the Funnels on Rails approach:

  • Get your users and subscriptions through real sales funnels, so you think about your offer, market and ideal customer from day one, not after you've built your product (i.e., never).
  • Build it on steroids with THE king category open source Rails starter kit: Bullet Train.

The bottleneck turned out to be the sales funnel part. I've played through some open-source options for the customer-facing part of the whole thing, and I'm not super excited about any of them.

So what I want to offer is a guided "build your SaaS customer-first" program:

👀 Week 1-2 - build your funnel; draw down first core ideas and screens of your app; prepare outreach phase;

🗺️ Month 1-2 - shape your offer and talk to 50 people from your niche "Mom Test style" so you know what you're gonna build; get first sign-ups for the waitlist; deploy your app; connect the app with ClickFunnels;

🛠️ Month 3-4 - build the thing; keep talking to customers; iterate;

👫 Structure: We meet ~2 times a week: Accountability session during the week; hands-on session on Saturday and/or Sunday; private Slack channel;

The program itself is free, but you will need a ClickFunnels subscription to join (we'll be working hard on the sales funnel part during the 14-day trial - or 30 if you follow their 1-funnel-away-challenge - where you can still decide whether it's for you or not).

We are 3 Rails Builders already excited about getting this rolling within the next couple of weeks. So let me know if you wanna join this one!

The Mom Test async book club

Pascal, a couple of other Rails Builder folks, and I are reading The Mom Test using the most fun async book club app, to ramp up on a framework for organizing and having useful customer conversations:

  • Don't even mention your idea as long as you can.
  • Deflect compliments on your idea.
  • Don't get into pitch mode.
  • Anchor fluffy comments with tangible facts.
  • Dig into ideas and feature requests.

If you want to learn about all that and deepen your knowledge, we will also have occasional sync book club sessions to discuss tactics and applications of the content.

As part of the Rails Builders accountability groups, I am also implementing the Mom Test with one of my product ideas. If you want to witness that process live and create alongside other savage Rails Builders, consider getting on the waitlist for Rails Builders I ⚔️ and Rails Builders II ⚔️ ⚔️ or join the new Rails Builders III ⚔️ ⚔️ ⚔️ as described above. We have one spot open for the Tuesday group and will pick someone from the waitlist this or next week. Just let me know what you want!

]]>
<![CDATA[Thoughtful side project pivoting]]>Do you ever switch from one project to another spontaneously? It often happens when something new comes along, and you snag the chance to leave this thing alone that you were excited about just recently but came to view as a burden as time passed. It has a sour aftertaste

]]>
https://richstone.io/thoughtful-side-project-pivoting/67f2da5d0b9b920001074d07Sun, 06 Apr 2025 23:25:40 GMT

Do you ever switch from one project to another spontaneously? It often happens when something new comes along, and you snag the chance to leave this thing alone that you were excited about just recently but came to view as a burden as time passed. It has a sour aftertaste of having left things undone that probably still had potential.

I think what I have here is an example of a more thoughtful pivot: not switching to something entirely new, but continuing the core of the thing and switching the theme. At least, I don't feel the aftertaste, and I imagine you could benefit from my thought process when deciding on your next pivot.

I started building socialgames.cc a few months ago, mainly because I wanted a showcase of running your SaaS on a marketing platform API, so you can develop marketing first and delegate the ugly chunk of subscription and user management work to a third party that has hordes of developers working on that functionality.

I didn't want it to be a dummy app, but something that I or other people can actually use, even if it's simple. I went for an offline game score tracker for a few reasons: I'm into games and "needed" this at home and with friends. It's simple to build a first version of it and serve its purpose of making the sample app "do something".

Given all that, I have been doing some product-market-fit work:

  • According to my notes, I talked to about 50 people in the "Mom Test style". (not the complete Mom Test, just folks I know or met, but in the Mom Test way, at least)
  • Created numerous threads in related communities to understand whether I am solving a real problem and what it is (I had many comments and learned a lot, still - recommended!).
  • Shipped some posts on socials.

Most responses were lukewarm at best, not really indicating a need. And my own need for it seemed to be gone. I haven't even found the time or budget to buy the one and only competitor app in the App Store for $5.99. And to be honest, it's hard to think of communities and places where I could get leads and talk to real people. It's also hard to imagine I'd be excited to talk to random people or game nerds about a board game score tracker. As a counter-example, if I built an API- or Rails-focused product, I could easily have 200 valuable conversations with the right people in the next few weeks. I'm sure there are many ways I could pivot within the socialgames realm, but I don't think it makes sense in the grand scheme of things.

Also, by this point in the year, I had planned to have already started implementing the sample SaaS integration with the Marketing Platform API. So, I'll stick with the plan and start this now. I'm basically going in reverse with this pivot:

  1. Spin up a new sample app.
  2. Build the dummy app and showcase how you can run your SaaS without putting much work into user and subscription management.
    1. In the meantime, do deep product research in the API-product space, do your Mom Tests, and get a feel for the market.

Having some kind of an API-focused Rails product would be the coolest thing I could build, given my professional focus over the past 7 years. As I game-plan, I should have a solid base if I review all the stellar products I've used, read about in blogs, and heard about on podcasts, and see what I can apply to the Rails space.

This makes so much more sense. I probably only started with a game tracker initially because it was easy to begin with. Doing it in reverse now, I can still start building right away while working out a tangible idea that makes more sense from many viewpoints along the way.

Writing this down, I think this is a good enough pivot. Hope you can gather enough data next time to make your pivot more thoughtful! Let's do some integration work now! 🔌

]]>
<![CDATA[DNS brain teaser for your engineering brain]]>I just spent a few weeks wrestling with a domain issue that could have been solved in a couple of hours max (not a literal few weeks, but half an hour here and an hour there over a few weeks). Let me walk you through what happened and the technical

]]>
https://richstone.io/dns-brain-teaser-for-your-engineering-brain/67e0207597b4970001cc793cSun, 23 Mar 2025 17:51:34 GMT

I just spent a few weeks wrestling with a domain issue that could have been solved in a couple of hours max (not a literal few weeks, but half an hour here and an hour there over a few weeks). Let me walk you through what happened and the technical details I learned along the way about DNS and domains.

For context, I was trying to connect a subdomain of my primary domain (richstone.io) to my ClickFunnels account. I already have this blog that you can navigate to at richstone.io, and I wanted devs.richstone.io to point to ClickFunnels, which I currently use basically like a Typeform and sometimes like a Stripe Checkout.

The setup required verifying certain TXT records, and this is where things got strange: some TXT records were verifying just fine, while others stubbornly refused to work.

In my Namecheap dashboard, I could see all the records clearly but ClickFunnels wasn't cool with them at all:

  • Google verification TXT records ✓
  • ClickFunnels verification records ❌
  • Some additional mail TXT records ❌

The user mindset combined with engineering pre-assumptions

Here's where I went wrong. Instead of just thinking like an engineer, I got stuck in what I call the "user mindset."

To my engineering brain, a TXT record was a TXT record and should always be visible when the DNS registrar is queried. But when it comes to DNS, nothing is as it seems. 🔮

So I kept being a user and tried to make the system behave:

  1. Deleting and re-adding the same TXT records
  2. Double-checking my entries for typos
  3. Waiting longer for "propagation"
  4. Wondering if it's a ClickFunnels verification issue
  5. Talking to others about it (looking at you fellow "engineers"! 🫵)
  6. Removing and re-adding the subdomain
  7. Trying it with a new subdomain

I literally did this dance for weeks. I'm such a "user"! :/

The one command

Now that I was out of options, the heavy weaponry came out. Luckily, I have access to the ClickFunnels codebase, so I started digging into the source code. After about 10 minutes of looking, it became clear to me that ClickFunnels just queries the public DNS records that anyone can look up. I knew immediately that salvation was near.

I asked the LLM gods for a terminal command to look up DNS records and dug out dig TXT richstone.io:

▶ dig TXT richstone.io

; <<>> DiG 9.10.6 <<>> TXT richstone.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10487
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;richstone.io.                  IN      TXT

;; ANSWER SECTION:
richstone.io.           1799    IN      TXT     "google-site-verification=FCyHG3GsXOhByyGS_uEpWNSFQyMOXQVIXrL9ujdrKeE"

;; Query time: 46 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Sun Mar 23 17:49:10 CET 2025
;; MSG SIZE  rcvd: 252

The TXT records I tried to add weren't visible in public DNS at all. So I had enough material for a chat with Namecheap support, which was pretty stellar, by the way.

The Technical Culprit: CNAME Record Priority

When I finally connected with Namecheap support, they identified the issue.

I had a CNAME record for my bare domain (using the @ symbol) pointing to the server where richstone.io is hosted. According to Namecheap, CNAME records have the highest priority in the DNS hierarchy and will suppress other records for the same hostname.

So even though I was adding the TXT records correctly in Namecheap, they were essentially being overridden by the CNAME record at the public DNS level.

The support agent explained something fascinating about DNS priority:

"The CNAME has the highest priority and suppresses all other records (like TXT record, MX record etc) for the host @. That is why your TXT record is not working for the host @." - 🤯

I still only half-believe it, because they couldn't explain why I was then still seeing the google-site-verification TXT record. (For what it's worth, the DNS specs do say a CNAME must not coexist with other record types at the same name, which is probably why it shadows them.) But whatever, the solution below worked.

The Solution: CNAME → ALIAS

Instead of using a CNAME record for the bare domain, I was told to use an ALIAS record:

ALIAS @ richstone.serveriorelwham.com

The key difference is that ALIAS records don't suppress other record types. As the support agent explained:

"The main difference between CNAME and ALIAS records is that the ALIAS record does not suppress A, MX, TXT, CAA records for the same host, unlike the CNAME record."

Maybe this is specific to Namecheap, maybe not. But right after doing that, the weeks of userness were gone:

▶ dig TXT richstone.io

; <<>> DiG 9.10.6 <<>> TXT richstone.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26277
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;richstone.io.                  IN      TXT

;; ANSWER SECTION:
richstone.io.           1799    IN      TXT     "clickfunnels-domain-verification=jWMWXZ"
richstone.io.           1799    IN      TXT     "google-site-verification=FCyHG3GsXOhByyGS_uEpWNSFQyMOXQVIXrL9ujdrKeE"
richstone.io.           1799    IN      TXT     "v=spf1 include:mailgun.org include:mailer.myclickfunnels.com ~all"

;; Query time: 40 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Sun Mar 23 18:21:13 CET 2025
;; MSG SIZE  rcvd: 252

Technical lessons for fellow domain managers

  1. Understand DNS record hierarchy - CNAME records at the root domain suppress all other records (apparently).
  2. Use ALIAS records for root domains when you need other record types to work simultaneously.
  3. Always verify public DNS status with tools like dig rather than trusting what you see in your registrar dashboard (!)

All the fish

The next time you're facing a technical issue that seems inexplicably difficult, try running some diagnostic commands first. Five minutes of engineering thinking can save weeks of user-level frustration.

P.S.: I love "users", I'd just prefer not to be one when I could have avoided it.

]]>
<![CDATA[Rails 8 on Kamal example setup with Sidekiq, Redis and Postgres]]>Finally, after years of Heroku, Render and fly.io, you make the switch to self-hosting. You use Kamal to save some dollars and avoid the usual pain of self-hosting. You hit good timing because Kamal has matured.

Here's a fully working and continuously supported setup of Rails with

]]>
https://richstone.io/rails-8-on-kamal-example-setup-with-sidekiq-redis-and-postgres/67cda9bc97b4970001cc72f9Sat, 15 Mar 2025 17:22:39 GMT

Finally, after years of Heroku, Render and fly.io, you make the switch to self-hosting. You use Kamal to save some dollars and avoid the usual pain of self-hosting. You hit good timing because Kamal has matured.

Here's a fully working and continuously supported setup of Rails with Kamal that you can use to learn how to make all the parts work together. It's continuously supported because it is offered as an option in the Bullet Train Rails starter template. This Kamal setup also powers the (upcoming) related open source app.socialgames.cc SaaS.

In this guide, I'll show you two ways of setting up Kamal:

  1. For a completely new Bullet Train app.
  2. For an existing (Bullet Train) app.

But you should be able to apply the concepts to your non-Bullet Train app and use the PR code as an example.

If you are not familiar with the Bullet Train starter template: it's basically Rails on steroids that gives you everything you need for your SaaS app development, so you spend less time reinventing the wheel and more time hacking out your core features.

Prerequisites

To follow this guide, you will need:

  • A server that you can SSH into. I've used Hetzner, see the disclaimer at the end of this section.
  • A domain that you own that points to your server. For example, if you will be using a subdomain for your app, you can just create an A Record in your domain registrar:
TYPE: A Record
Host: bubba
# Your IP from your SSHable server:
Value: 999.999.999.999
TTL: Automatic

For a domain like socialgames.cc, you will be able to hit your Kamal server with bubba.socialgames.cc.

  • Ruby and Docker installed locally.
  • Kamal installed locally:
> gem install kamal
Fetching net-scp-4.1.0.gem
...
8 gems installed

> cd code/github/app
app> kamal init
Created configuration file in config/deploy.yml
Created .kamal/secrets file
Created sample hooks in .kamal/hooks

  • You need to have created Rails production credentials locally via EDITOR="code --wait" bin/rails credentials:edit --environment production.
🔏
A note on your remote production server: To get started, it would be as easy as setting up a Hetzner server. There are 1-2 small decisions you need to make during purchase/setup, but you don't need anything else other than being able to do $ ssh root@999.999.999.999 from your local machine to log in. However, you might want to harden your server afterward if you want to keep your app, your data and your users safe. I heard great things about the Kamal Handbook, where you can learn all about it, including a slick server setup script.

New Bullet Train apps

You are starting a new adventure!

You'll probably want to go through the app setup here and see your app running locally first, which includes stuff like cloning the repo, running bin/configure and running bin/setup. bin/configure will initiate some relevant name changes for the deploy.yml and secrets files.

GitHub - bullet-train-co/bullet_train: The Open Source Ruby on Rails SaaS Template
The Open Source Ruby on Rails SaaS Template. Contribute to bullet-train-co/bullet_train development by creating an account on GitHub.

The changes you will need to do manually at this point are here:

(the number of manual changes needed might change in future PRs)

All the files that need changes:


Changes in config/deploy.yml

Basically, fixing all the FIXMEs:


Changes in .kamal/secrets

The setup below assumes that all the env vars are exported on your local machine, for example in your .zshrc or .bashrc files. A more robust setup is to do this via kamal secrets extract and a secrets vault.
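As a sketch of that env-var approach (the key names are illustrative — use whatever your deploy.yml actually references), the .kamal/secrets file can simply forward values from your shell environment:

```shell
# .kamal/secrets - each value is read from the local shell environment at deploy time
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD
RAILS_MASTER_KEY=$RAILS_MASTER_KEY
POSTGRES_PASSWORD=$POSTGRES_PASSWORD
```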


Changes in config/database.yml


Deployment

Once you are done with the changes, you can run:

app> kamal setup
  INFO [c1d95a7c] Running /usr/bin/env mkdir -p .kamal on 999.999.999.999

... this can take a few minutes...
... but then it should be "Finished" 👇

Releasing the deploy lock...
  Finished all in 128.0 seconds

Existing Rails projects

At this point, this section is even rougher than the previous one.

For an existing Rails project you may want to start by:

> cd app
app> kamal init

Then make changes as needed as shown in this PR:

(if you are using Bullet Train and the PR was already merged, you could probably also update your app to the latest Bullet Train version if that's an option for you)

(you don't need all the Bullet Train-related changes; focus on the newly created files from kamal init, the Dockerfile, cable.yml, database.yml, …)

Also, make the "manual changes", e.g. in the deploy.yml, as described in the previous setup section for new Bullet Train apps.

Lastly, run kamal setup to get things running.

Hosting multiple apps

Here's an example of how I added Kamal deployment to the freshly started Funnels on Rails app (following the "Existing Rails project" instructions from the previous section) and deployed it with Kamal on the same server alongside another app:

Deploy with Kamal · RichStone/funnels-on-rails@46cae7e
Contribute to RichStone/funnels-on-rails development by creating an account on GitHub.

The most important things to keep in mind here:

  • I am using the same accessories from the first app that I deployed (Postgres DB and Redis). In this case, the accessories were defined in the deploy.yml of the first app.
  • Same as with Postgres (which is handled by your database.yml), you need to configure a different Redis database within your single Redis instance (a Redis instance can hold up to 16 databases by default).
  • This is not a highly scalable setup.
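To make the Redis point concrete, here is a sketch of how the second app could select a different Redis database on the shared instance (the accessory host name and database number are illustrative assumptions):

```yaml
# deploy.yml of the second app, reusing the first app's Redis accessory.
env:
  clear:
    # The trailing /1 selects Redis database 1; the first app keeps the default /0.
    REDIS_URL: redis://first-app-redis:6379/1
```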

References:

Issue Deploying Two Rails Applications with Shared Postgres Accessory · basecamp kamal · Discussion #1178
I have two separate Rails applications setup up to deploy with Kamal 2 to the same server, both using Postgres. Application 1 deploy.yml # Name of your application. Used to uniquely configure conta…
Multiple Sidekiqs using a single Redis database
by u/stpaquet in rails

Making it past sign up

After you have things up and running, you might attempt to sign up. This will fail if you don't have a mail server configured yet, because sending the sign-up email will error.

You will need to:

  • Uncomment the Postmark gem for production:
group :production do
  # We suggest using Postmark for email deliverability.
  # gem "postmark-rails"
  • Create a Postmark account.
  • Create a new sending server.
  • Get the server API key.
  • Add it to your local secrets.
  • Replace support_email: change.me@localhost in application.en.yml with the email address you signed up to Postmark with (until you are approved, you can only use that sender as your From address and send to your allow-listed email addresses, like the-address@i-signed-up-with-in-postmark.com).
  • You also need to fix your production.rb mailer host here: config.action_mailer.default_url_options = { host: "example.com" }
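Putting the mailer-related bullets together, the production mailer configuration might end up looking roughly like this (a sketch: the delivery_method and postmark_settings keys come from the postmark-rails gem, while the host value and credentials key are assumptions to adapt):

```ruby
# config/environments/production.rb
config.action_mailer.delivery_method = :postmark
config.action_mailer.postmark_settings = {
  api_token: Rails.application.credentials.dig(:postmark, :api_token)
}
# Fixes the "example.com" default so links in sign-up emails point to your app:
config.action_mailer.default_url_options = { host: "bubba.socialgames.cc" }
```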

Setting up CI

Check out Lars Corneliussen's GitHub Actions CI setup in this GitHub Gist.

Running a second app on the same server

This might be as simple as reusing the same setup but changing the Postgres and Redis ports as well as possibly some parts of the Dockerfile. According to this article, this is how it would work for Postgres and I'm also adding my assumption of how this would work for Redis:

# deploy.yml
env:
  clear:
    POSTGRES_DB: socialgames_cc_production
    DB_PORT: 5432 # <-- Internal port - stays the same.
    REDIS_HOST: socialgames-cc-redis
    REDIS_PORT: 6371 # <-- Changed to a different port.
...

accessories:
  db:
    image: postgres:14
    host: 999.999.999.999
    port: "127.0.0.1:5433:5432" # <-- Only change the exposed localhost port.
...
  redis:
    image: redis:7.0
    port: 6371 # <-- Changed to the same port as we expose in the env vars.

For the Dockerfile, make sure you don't have anything conflicting in case you operate on the host machine.

Gotchas and TODOs

  • The Dockerfile can be improved, to rather resemble this.
  • We might want to provide a BT-specific server setup/hardening script.

Cool people who helped and resources I used

Looks like it takes a village to raise a Kamal app; I've booked about 25 Pomodoro-like focus blocks on this. For a person who allocates about 1-2 hours per day to side projects, that's 3 weeks. Gotta be careful with your side quests! But this one was really needed for me to host my upcoming open source BT sample SaaS app, so it's time well spent.

I've also posted about this on socials and got support from incredible people.

Big thanks go to:

  • Josef (helped review the earliest version! 💪)
  • Justin Marsh (helped a lot bouncing thoughts)
  • Lars Corneliussen (shared his working and well-looking setup)
  • swombat (Shared his Bullet Train Dockerfile)
  • Zack Gilbert (Sharing his experience)

Sorry if I forgot someone.

In terms of articles and examples, I started with the Kamal official docs:

Deploy web apps anywhere
From bare metal to cloud VMs using Docker, deploy web apps anywhere with zero downtime.

I bumped early into these articles and learned quite a bit from them:

Hiccups While Putting a Rails 8 App in Production w/ Kamal 2 - Zack Gilbert
Pointing out a few of the hiccups I ran into while putting a simple Rails 8 app in production with Kamal 2: Choosing and setting up a server & getting the Docker registry working.
A detailed look at running Kamal setup/deploy
Now that we have our Sinatra app up and running I want to do a quick run-through to talk about the simplicity of Kamal and how it’s really just a nice layer on top of some great open-source tools like Docker and Traefik. Whenever running a Kamal command it
Kamal’s missing tutorial – how to deploy a Rails 8 app with Postgres
Rails 8 is out. And with it Kamal 2, the new default way of deploying Rails apps. But Kamal is hard for the uninitiated. This is a complete tutorial on how to get a Rails app fully in production using Kamal.

I also used other people's setups for inspiration:

BT docker setup
BT docker setup. GitHub Gist: instantly share code, notes, and snippets.
]]>
<![CDATA[How I use AI coding tools as a Rails dev]]>
🗞️
TL;DR update June 25th 2025
Agents are here, i.e. in my IDE. RubyMine's Junie and GitHub's Copilot deliver OK results, but are pretty slow.

Claude Code (on the Max subscription) has conquered my terminal and killed those other agents for good.

Still
]]>
https://richstone.io/how-i-use-ai-coding-tools-as-a-rails-dev/67cc5be497b4970001cc71e3Sun, 09 Mar 2025 16:31:22 GMT
🗞️
TL;DR update June 25th 2025
Agents are here, i.e. in my IDE. RubyMine's Junie and GitHub's Copilot deliver OK results, but are pretty slow.

Claude Code (on the Max subscription) has conquered my terminal and killed those other agents for good.

Still sometimes using GitHub Copilot for quick questions and some inline IDE stuff because for some reason GitHub thinks I'm an open source maintainer and gives me Copilot for free.

It surprises me sometimes that a lot of devs are still not very bullish on coding with an LLM, so here is a little summary from a Rails dev in the middle of the usage spectrum about what I'm using the LLMs for and what my current toolset is. I will also add some general tips and warnings at the end of the post about prompting and taking advice from LLMs vs humans.

A year ago, I made the funny mistake of buying GitHub Copilot for a year because it seemed like the best solution for JetBrains at the time. It's "funny" and a mistake because the landscape changes so fast that I think experimentation is essential for the years to come. Let me chat for a bit about all the tools I use so I can laugh about it again in a year or so.

My best prediction is that the whole mid-term future of the "AI coding thing" is us collaborating with the IDE/editor on:

  1. Architecting a solution.
  2. What code to write.
  3. What code to rewrite.

All of it with minimal switching and copy-pasta between the place you write your LLM instructions and the code. I experienced this first hand on small greenfield projects with Rails and Python in Cursor, so I'm pretty sure Cursor is on the bleeding edge of the whole story.

Now, unfortunately, I can't make myself code in anything other than RubyMine so far when it comes to Rails. I try to move to Cursor occasionally but keep switching back to RubyMine because it has everything I need for coding, debugging, and committing code out of the box. VSCode and Ruby LSP just don't cut it for me when I set things up manually, and I haven't seen grand automated setups yet. So, I will give you a breakdown of what I'm using LLMs for, what I tried, and what works well for me in RubyMine. Something similar might apply to you if you don't use Cursor and co; you might learn what to look out for.

I use LLMs primarily for:

(in about this order of frequency)

  1. Debugging a specific error, explaining unclear/foreign-language code or asking for a syntax bit.
  2. Autocomplete.
  3. Asking it to write specific code and to write tests for that code (depending on the setup, in this or reverse order).
  4. Asking it general questions about the thing I'm working on. I could be curious about "why is it called normalizing a database schema?" or "who came up with i_suck_and_my_tests_are_order_dependent! in minitest?"
  5. Formatting - like "rewrite this code in this format" or "create a markdown table for this list".
  6. Shaping and architecting the solution (less so for big professional projects, more so for greenfield projects).
  7. Asking it to write my code when my physical and mental resources are depleted, but I still want to ship that last bit late in the evening. It's amazing how much further you can get nowadays, backed by LLMs. :D
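For the curious: that really is a method Minitest ships with - a class-level macro that trades the default random test order for alphabetical order. A tiny sketch:

```ruby
require "minitest/test"

class OrderedTest < Minitest::Test
  # Yes, this is the method's real name in Minitest. It opts this class out
  # of the default randomized test order and runs tests alphabetically.
  i_suck_and_my_tests_are_order_dependent!
end

OrderedTest.test_order  # => :alpha
```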

LLMs don't have all the answers yet. So I'm probably using 90% LLMs - 10% Google nowadays.

Currently trying: Supermaven chat

I have been exclusively using the Supermaven autocomplete for over a year now. It's practical, fast and free.

The chat has some UX issues, but other than that, it works fine. I think it has some kind of a credit system. After 5 days, I'm halfway through my credits. I guess I'll need to pay more in the end if I want to keep using it. I don't see any greatness in reducing copy-pasta friction, and how the shortcuts work makes no sense to me. I think GitHub Copilot does a better job overall.

The UI is quite ugly, but that's normal for the JetBrains plugins. It sometimes does a better job formatting things than the GitHub chat.


Used for a year: GitHub Copilot

The autocomplete is very slow and of lower quality. As mentioned above, even when paying for GitHub Copilot, I was using Supermaven for the autocomplete. I'm pretty sure it's the way to go.

GitHub Copilot chat has good results, has some context awareness and seems to get new features frequently. It also has some inline code editing features and shortcuts that work, but it is nothing Cursor-like when it comes to reducing copy-pasta friction. Though, I see their features turning long-term into something like this if they keep developing it.

JetBrains AI Assistant

I'm secretly hoping that JetBrains' "AI Assistant" will become the world's number one AI tool in the future. But currently, out of all the tools I've played with so far, it's first from behind.

  • If you ask it to solve a problem, it's barely context-aware, gives you wrong advice and gets into loops quickly.
  • Has slow auto-complete.

I think JetBrains gives you a short trial every time they ship a new major update to RubyMine and co. I usually try it for a day and get rid of it.

Claude

I sometimes use claude.ai for writing and visualizing a draft of something, like if I'm under time pressure to submit the RailsConf CFP in the next 1-2 hours. You can give Claude the context of your writing so you get results where you can actually use some snippets for your project.


General tips

How do you avoid destroying your code, hardware, and life by using LLMs?

Running suggested LLM commands

You will develop an instinct for when to ask the search engines vs. when to ask the LLMs. And when to ask the search engines if you asked the LLM first.

I had this warning when doing source .zshrc that I wanted to fix:

▶ source ~/.zshrc
compinit:527: no such file or directory: /opt/homebrew/share/zsh/site-functions/_brew_services

I reached for the LLM first.

When I saw the LLM answer, I was like, "I need to check this with some humans who actually ran into this", and I figured I was probably (hopefully) one search away. And there it was: a totally human response with 100 👍.

compinit:527: no such file or directory: /usr/local/share/zsh/site-functions/_docker_compose · Issue #12002 · ohmyzsh/ohmyzsh
Describe the bug When I open zsh on vs code terminal, the warning compinit:527: no such file or directory: /usr/local/share/zsh/site-functions/_docker_compose appear on it. I saw an issue is the sa…

Unfortunately for the human race, many (early-career) devs won't have that instinct yet and might run commands blindly, messing up their machines or worse. I know because I ran StackOverflow commands blindly in the past and needed to reach for the Time Machine backup. And actually, I might still run a command without understanding it 100%, but I look for different sources and proof. LLMs can sound even more convincing, though. It's a dangerous time. :D

Conclusion

A former colleague of mine - let's call him David (that's actually his name; he's great) - once said on a public call last year that AI was not groundbreaking for him in delivering technical solutions. I found it very interesting because, to me, it feels like LLMs have fundamentally changed my behavior while creating technical solutions.

Later, David told me that he actually uses AI quite a bit for finding information during coding, but it hasn't necessarily changed how he creates the final solution. Finding information is already a lot of what we do as devs, so it's funny how quickly we get used to a new technology like this and don't consider it groundbreaking. I can't measure the impact, but I feel that LLMs made my process quicker and easier.

I'm pretty sure that you should use a tool like Cursor in the mid-term and hack it out together. I should be using it too, already. 😅

But if you are a RubyMine degenerate like me, you will want a chat tool inside of your IDE so as not to switch between windows and at least have some bits of copy-pasta reduction and context awareness right in your pair programming AI partner chat box. The best tool for this at the moment is probably GitHub Copilot, but I will keep testing new subscriptions in the following months and let you know about the progress.

]]>