About

AI is a creative medium.

Most people think of technology as a tool — something you use to get a task done. I think of it as a material — something you shape, combine, and push until it becomes something that didn't exist before.

There's a moment in every creative process — painting, music, writing, engineering — where the material talks back. You push it one way and it resists. You try something unexpected and it opens up. That feedback loop between maker and material is where the interesting work happens.

AI has that quality. When you connect a vision model to a real-time audio pipeline, the system starts exhibiting behaviors you didn't explicitly design. When you train an AI analyzer, the model finds patterns you hadn't noticed. When you build an interactive canvas that maps how your projects relate, you see patterns in your own thinking you weren't aware of.

That's what drives me. Not the technology itself — but what happens when you treat it as a creative partner instead of a tool to configure. The projects on this site are experiments in that idea: what becomes possible when you approach AI with a maker's mindset?

I'm interested in the space where engineering meets imagination — where the question isn't "can this be built?" (usually yes) but "should this exist, and what would it feel like if it did?" That question leads to smart glasses that see and act, to documents that explain themselves, to audio that reveals its own authenticity.

How I think about building

Start with the question, not the stack

Every project here began with "what if?" — not with choosing a framework. What if glasses could be an AI agent? What if dense financial documents could explain themselves? The technology follows the idea.

AI amplifies creative range

Five years ago, building a real-time multimodal agent was a research lab project. Today one person can do it in a weekend — if they think of the AI model as raw material to shape, not a black box to call.

Systems thinking is creative thinking

A pipeline that moves audio through detection, transcription, and embedding extraction isn't just engineering — it's composition. Architecture is a creative discipline.
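As a toy illustration of that idea (the stage names and thresholds here are invented placeholders, not real APIs): each stage is just a function, and the pipeline is their composition over an audio chunk.

```python
# Hypothetical sketch of pipeline-as-composition. Each stage is a plain
# function; the "architecture" is simply how they chain together.
def detect_speech(chunk):
    # Placeholder: pass through chunks whose peak amplitude clears a threshold.
    return chunk if max(chunk, default=0) > 0.1 else None

def transcribe(chunk):
    # Placeholder: a real system would call a speech-to-text model here.
    return f"transcript({len(chunk)} samples)"

def embed(text):
    # Placeholder: a real system would return a vector from an encoder.
    return [float(len(text))]

def compose(*stages):
    """Chain stages left to right, short-circuiting when a stage yields None."""
    def run(x):
        for stage in stages:
            if x is None:
                return None
            x = stage(x)
        return x
    return run

pipeline = compose(detect_speech, transcribe, embed)
```

The composition is the design decision: reordering, swapping, or inserting stages changes the system without touching any single stage.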

Ship the ugly version, then make it beautiful

The fastest way to know if an idea works is to build it — messy, hardcoded, no tests. Then, if it has legs, you refactor, polish, and add the craft. Both matter.

What creating with AI looks like

People talk about AI as if it's one thing — a chatbot, a classifier, a generator. But AI isn't one material; it's dozens of them, each with different properties.

Vision models

Give a system eyes. Stream video to a model and suddenly it can describe what it sees, read signs, recognize context — all in real time.

Language models

Not just chat — structured analysis. FinSight uses Claude to decompose 300-page SEC filings into risk flags, key metrics, and comparative insights.

Audio models

Voice is identity. Speaker embeddings capture who someone is; spectrogram analysis reveals whether audio is real or synthetic.

Embedding spaces

The hidden geometry of meaning. Face embeddings for recognition, word co-occurrence graphs for semantic analysis — new ways to see invisible relationships.
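A minimal sketch of what that geometry means in practice: closeness in an embedding space is usually measured with cosine similarity. The vectors below are toy values, not output from any real model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors:
    1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two embeddings of the same identity should sit close together
# (score near 1.0); a different identity should score lower.
same_person = cosine_similarity([0.9, 0.1, 0.3], [0.85, 0.15, 0.32])
```

Recognition, clustering, and semantic search all reduce to variations on this one comparison.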

Real-time pipelines

Streaming audio and video through models in real time, under latency constraints, is a design challenge. The constraint shapes the creative output.
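One way to make that constraint concrete (the chunk size and helper names are illustrative, not from any real project): a streaming pipeline keeps up with real time only if processing one chunk costs less than the chunk's own duration.

```python
import time

CHUNK_MS = 200  # e.g. 200 ms of audio per chunk (an assumed value)

def timed(stage, x):
    """Run one stage and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = stage(x)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

def within_budget(stage_times_ms, budget_ms=CHUNK_MS):
    """True if one chunk's total processing time stays under the chunk's
    duration — the point past which the pipeline falls behind real time."""
    return sum(stage_times_ms) < budget_ms
```

That budget is what shapes the creative output: every model choice, batch size, and resolution trades quality against the milliseconds it consumes.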

Interactive canvases

Visualization as thinking tool. The /lab canvas didn't just document my projects — building it revealed connections I hadn't consciously made.

Let's create something

I'm drawn to projects where AI isn't a feature bolt-on but a core creative material — where the technology enables something genuinely new, not just faster. If you're working on something like that, I'd love to hear about it.