Earlier this year I decided to go beyond a surface-level understanding of AI.
I’ve used AI tools in my workflows for a while, but I wanted a deeper understanding of how they work under the hood.
I took DeepLearning.AI’s RAG course, and finishing it really shifted the way I think about AI. Honestly, I came away very excited about the capabilities AI affords.
My biggest takeaways:
- LLMs don’t “know” things – they predict language. This was my biggest gotcha. Where AI used to feel magical, it now reads as a game of probability.
- RAG is a game changer. I’d already built API integrations with OpenAI, but now I realize that work barely scratched the surface. Where LLMs fall short on knowledge and accuracy, RAG grounds them in real, trusted data. It’s the difference between a clever demo and a system you can actually rely on.
- Security hasn’t magically changed just because AI is involved. You still have to think defensively. Prompt injection, malicious inputs, access control – it’s the same mindset as traditional application security, just applied to a new surface area.
- Observability is not optional. Keeping a human in the loop is necessary to improve the quality of LLM responses over time.
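To make the RAG takeaway concrete, here is a minimal sketch of the pattern: retrieve the most relevant documents, then build a prompt grounded in them. Everything here is illustrative – the documents, the naive keyword-overlap scoring, and the prompt template are assumptions; a real system would use embeddings and a vector store instead.

```python
# Minimal RAG sketch: retrieve relevant text, then ground the prompt in it.
# The scoring function and prompt wording are illustrative assumptions.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, context: list[str]) -> str:
    """Assemble a prompt that tells the model to answer from context only."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}"
    )

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Support is available via email at all hours.",
]
question = "How long do refunds take?"
prompt = build_grounded_prompt(question, retrieve(question, docs))
print(prompt)
```

The “answer only from context” instruction is what turns retrieval into grounding: instead of letting the model predict from its training data, you constrain it to the trusted documents you supplied.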
I’ve been documenting my learnings as I go. Check out my other posts if you want to dive in further.
What I’m building next: I’ve started development of a new product. It’s for people who care about maintaining relationships – not transactionally, but to truly connect. The short of it: context-aware prompts that nudge you to reach out to key contacts. Excited to apply my learnings to this application!
P.S. If you’re exploring how AI should fit into your product or workflow, check out my AI Product Clarity Session. It’s a paid working session to help teams decide what to build, what to avoid, and where AI actually earns its keep.
