I Didn’t Start With a Plan — I Started With a Question
I didn’t set out to become a gardener. I set out to get better at using AI. A low-stakes raised bed experiment turned into a lesson about feedback loops, staying in the conversation, and why squirrels are basically production bugs.

I didn’t set out to become a gardener.
I set out to get better at using AI.
That’s the actual origin story here. I wanted to practice using AI as a tool — not for work, not for anything high-stakes — just to understand how it actually behaves when you commit to using it for something real over time.
So I looked around the house for a project. Something low-stakes. Something that wouldn’t matter if I messed it up. Something I’d probably mess up.
I found this.
An old raised planter on the patio. Half-ignored. Not in great shape. Definitely not a “project.”
Which made it perfect.
Starting With a Picture, Not a Plan
I didn’t Google “best raised bed vegetables.” I didn’t buy a book.
I took a photo and asked one question:
“What should I do with this thing?”
The first real decision that came out of that conversation: start with seeds, not transplants. Given the bed size, the time of year, and the fact that I didn’t want to spend much money on what was still an experiment — seeds made more sense. They’re cheap. Slower, yes, and they require more patience. But for a project where I wasn’t sure I’d stick with it, buying $4 seed packets beat buying $8 per plant at the garden center.
That framing — this is an experiment, not a commitment — turned out to be the right one.
AI Picked the Plants
Based on my zone, the season, and the planter setup, the suggestions were: lettuce, spinach, arugula, carrots, cilantro, and chives. Cold-tolerant, relatively fast, easy to start from seed.

I followed most of that advice. I planted the seeds, set up the drip lines that were already there, and waited.
“Waiting” is where the discipline broke down a little. I overwatered early on. I didn’t space everything perfectly. I kept checking for sprouts every day like that would somehow speed things up.
At no point would anyone have looked at what I was doing and said, “yeah, that’s an experienced gardener.”
Setting Expectations Early (and Actually Listening)
One thing that helped: AI was upfront about timing from the start.
Carrots take a while. Greens come in faster. Radishes are actually quick. Don’t plant everything and expect it all to mature at once — stagger your attention the same way the garden staggers its output.
I mostly heard this. I didn’t always act on it. But knowing what to expect kept me from panicking when half the bed looked like it was doing nothing for two weeks. It probably was doing something. Underground.
Knowing the expected timeline changes how you interpret “nothing is happening.” Half of staying patient is just having calibrated expectations upfront.
Then Things Got Complicated
What AI couldn’t plan for: a late freeze that killed off the first round of germination in one section of the bed.
I hadn’t asked about frost protection. I didn’t have row cover. So I lost some of what was planted and had to replant that section from scratch.

Not a disaster. Just a setback. And honestly, a useful one — it forced me to ask better questions the next time around.
The Squirrel Problem (Which Is Really a Debugging Problem)
At some point, seeds were clearly disappearing before they could germinate. Something was digging.
I asked for suggestions. Got a few options: chicken wire cover, cayenne around the perimeter, repellent spray. I tried the cayenne.
It helped. Mostly. The squirrel came back anyway for a while.
And here’s the thing: this felt exactly like debugging a weird problem at work. Something is clearly wrong. You didn’t cause it. The usual fixes don’t fully stick. It doesn’t respond to logic. You try something, it improves, the problem comes back in a slightly different form.
You don’t solve the squirrel problem once. You manage it until it stops being the most urgent thing. Then you move on.
If you’ve spent any time debugging production issues, this is a familiar feeling.
The Feedback Loop
The thing that actually worked — in the garden and as an AI experiment — wasn’t following instructions perfectly. It was the loop:
Do something → show a photo → get feedback → adjust → repeat
I’d share pictures of whatever was happening: uneven rows, suspicious sprouts, soil that was clearly too wet, plants I wasn’t sure I should keep or pull. And I’d get back grounded, specific reactions.
- That’s a carrot — don’t pull it.
- Dial back watering here.
- These rows will be ready sooner than you think.
- That one’s a weed.
It felt less like searching for answers and more like working with someone who could see what I was seeing.
That distinction — between looking things up and staying in a conversation — turned out to be the whole point of the experiment.
Small Wins (Which Are Still Wins)

Despite the freeze. Despite the squirrel. Despite imperfect spacing and too much water in week two.
I actually pulled radishes out of the first bed and ate them. A little peppery, a little uneven in size, completely real.
I got broccoli out of the second bed — the one I started later in the season with transplants instead of seeds, because by that point I’d already proven the concept and it felt worth spending a few dollars on plants.
And right now, the newer bed has lettuce coming in along the edges. I’m not harvesting yet, but it’s close. That’s not nothing.
None of this is impressive by gardening standards. But that’s kind of the point. The bar was: does anything actually grow? And the answer was yes.
What Changed Wasn’t the Garden — It Was the Process
I didn’t suddenly become disciplined. I didn’t follow a strict watering schedule. I didn’t measure spacing precisely or optimize anything.
What I did:
- Paid a little more attention
- Made small adjustments based on feedback
- Kept going even when sections weren’t working
And somehow, that was enough.
Progress came from feedback loops, not better upfront planning. Which is, I think, the actual lesson — both for gardening and for learning to use AI.
The Actual Experiment
I wanted to understand how AI behaves when you use it over time for something real. Here’s what I found:
It’s better as a collaborator than a search engine. Not because it’s smarter than Google — but because context accumulates. Each question builds on what I’d already shown it. By month two, I wasn’t explaining what bed I had or what I’d planted. We were already past that.
It’s honest about tradeoffs. When I asked about seeds vs. transplants, it didn’t just say “seeds are fine.” It explained what I’d be giving up (speed, reliability) and why that tradeoff made sense for an experiment where I wasn’t sure I’d follow through.
It’s most useful when you show your work. A photo of something wrong in the garden got better feedback than any written description I could have typed. The more concrete the input, the more useful the output.
You don’t need to be great at something to make progress with AI. You just need to stay in the conversation long enough for things to develop.
I started with a random planter and a question. I ended up with actual food and a process I’ll use again.
Not bad for an experiment I wasn’t sure I’d finish.