Designing with friction: A cure for AI-induced cognitive atrophy
Over the past year I built a generative AI tool that hooked me on high-convenience, ultra-processed article summaries. It made it easier to consume ideas, but didn't help me think more intentionally or more critically. Now, I'm rebuilding it as a high-friction tool to counter AI-induced cognitive atrophy. Only when I built a connector giving Claude Desktop access to my tool did I discover I'd been approaching it all wrong.
Eighteen months ago I became interested in using AI as a second brain. I tried early versions of mem and Reflect, but neither really worked for me. So I built a simple, AI-supported reading app to help me capture and cross-reference content from across the web.
For 12 months, the app served me well, generating condensed overviews, extracting highlights and themes, and connecting disparate content, as well as capturing my own thoughts.
After a while, though, I struggled to get value from my captured content. I was using the app as a crutch rather than a growth tool: relying on the automated summaries more than I had intended, and hunting down content to re-read just to recall its insights. Text and conversational search didn't deliver the depth of understanding I had hoped for. I wanted to internalise the content I saved so I could build the knowledge myself. Instead, the AI was reading it for me and spitting out a middle-of-the-road overview, and I wasn't getting much out of it. Since launch I've written notes against fewer than 3% of captures. I knew I hadn't hit the mark yet.

Building an MCP server for the app led me to the realisation that I was misusing AI. Giving Claude access to what I'd saved let the content break free from the bounds of my app. When I was working on a write-up of a new product feature, Claude connected it back to the MAYA principle, from an article I had captured many months prior. I hadn't made the link myself, but Claude, using my app's MCP server, drew the connection I'd missed, contextualising the content rather than just summarising it.
A growing body of research shows that heavy use of LLMs can dull reasoning and memory through "cognitive offloading", the outsourcing of thinking to digital tools. Experts fare better than novices, but the evidence suggests that less cognitive struggle means fewer opportunities for mental engagement and critical analysis.
In my own experience I've felt the dopamine hit of an instant article summary, or a quick answer to make me sound smart. My app today feels like ultra-processed food: satisfying in the moment, but hollow over time. The research made me consider what a tool that actively demanded cognitive attention might look like.
So I'm pivoting: from a capture-and-summarise reader to a tool designed to make me smarter. I want an app that acts as an antidote to cognitive atrophy, one that actively supports me rather than subtly undermining me.
What might that look like in a product? My top priority is to avoid over-reliance by limiting access to AI-generated insights until I can demonstrate understanding and critical thought around a piece of content.
To achieve this, I'm starting with an experiment that prioritises high-friction "productive struggle" over instant gratification: before a piece of content is stored in my app's long-term memory, I have to write a short reflection on it (and only long-term-memory content can be surfaced in the future). Writing is a powerful tool to help us think, and there is evidence that intentionally adding latency may "scaffold deeper cognitive processing". Using writing as a gate means that if I want to find and use content later, I need to engage with it now. I'm also experimenting with having AI agents assess that writing, using a panel of tailored experts.
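As a rough sketch of the mechanic in Python (the names and the word-count threshold are illustrative only; in the real experiment the check would be an AI expert-panel assessment, not a length test):

```python
from dataclasses import dataclass

MIN_REFLECTION_WORDS = 50  # illustrative threshold, not the app's real rule


@dataclass
class Capture:
    url: str
    summary: str
    reflection: str = ""
    in_long_term_memory: bool = False  # only long-term content is surfaced later


def reflection_passes(reflection: str) -> bool:
    """Stand-in for the AI 'panel of tailored experts' assessment.

    Here we only check length; a real assessor would score depth,
    critical engagement, and connection to the source content.
    """
    return len(reflection.split()) >= MIN_REFLECTION_WORDS


def commit_to_memory(capture: Capture, reflection: str) -> bool:
    """Gate: a capture enters long-term memory only with a passing reflection."""
    if not reflection_passes(reflection):
        return False  # friction: the capture stays out of long-term memory
    capture.reflection = reflection
    capture.in_long_term_memory = True
    return True
```

The point of the design is that the gate sits between capture and retrieval: skipping the reflection doesn't lose the capture, it just keeps it invisible to future search and connection-making.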
There's an inherent tension here in using AI to reduce my reliance on AI. The writing requirement forces cognitive engagement. I'd rather an AI push me to think deeper than do the thinking for me.
How will I know it works? In the experiment I will be monitoring:
- Capture rate – does friction discourage me from capturing content?
- Usage – does overall app usage change?
- Recall – do I retain and apply more of what I read?
If you have thoughts or ideas for me, or on the wider domain, I'd love to hear them. If you enjoyed this post, please consider sharing it on Hacker News.
P.S. I'm not making my app generally available right now; it's designed for me ("Selfish Software", as Edmar Ferreira calls it). But if it's something you think you'd use, please let me know!