Is AI making us smarter or dumber?

Is AI ruining our ability to think? Well, it depends on how you use it. We draw some parallels to the chess world and dive a little deeper into particular use cases. The real question - do we even need to think anymore? We're not sure...

I use LLMs a lot - they help me write, code, analyze, and search. While AI helps me get more done, I also wonder if it affects my thinking.

We've been building AI tooling for over a year now, and questions like this guide our thinking on automation vs. assistance and low-ROI vs. high-ROI tasks.

A parallel in the chess world

A few weeks ago, a 17-year-old GM, Gukesh, became the youngest-ever challenger for the Classical World Championship.

For those who don't follow chess, AI engines have been better than the world's best chess players for more than 20 years. Right now, Stockfish is rated roughly 800 Elo points higher than the world's best-ever human player, Magnus Carlsen. That means if Magnus plays Stockfish, he's expected to win one game for every 250 played.
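For intuition, here's a back-of-the-envelope check using the standard Elo expected-score formula. The ratings below are illustrative round numbers (not official ones), and note that expected score counts draws as half a point, so outright wins are rarer still:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A vs. player B under the standard Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Illustrative ratings with an ~800-point gap (assumed numbers, not official ones).
magnus, stockfish = 2830, 3630
score = elo_expected_score(magnus, stockfish)
print(f"Expected score per game: {score:.4f}")  # ~0.0099, about 1 point per 100 games
# Since draws count as half a point, outright wins would be rarer than that -
# which is why a figure like one win per few hundred games is plausible.
```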

Chess players use engines to study openings and learn the best move in any given position. Players prep 'lines' with the help of AI, sometimes memorizing up to 20 moves deep, so they know the objectively best reply to anything the opponent plays. Because the engine calculates faster and deeper than any human, this has reshaped the chess world: it changed how people learn the game and evaluate positions, turned classical chess into more of a memorization contest, and rewrote modern opening theory.
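To make that concrete, here's a minimal sketch of what engine-assisted study can look like, using the python-chess library and a locally installed Stockfish binary. The engine path, opening line, and search depth are illustrative assumptions, not anyone's actual prep:

```python
import chess
import chess.engine

# Assumes a Stockfish binary is installed and available on PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

# Start from an opening position a player might be prepping (illustrative line).
board = chess.Board()
for move in ["e4", "c5", "Nf3", "d6", "d4", "cxd4"]:  # an Open Sicilian
    board.push_san(move)

# Ask the engine for its evaluation and best continuation at a fixed depth.
info = engine.analyse(board, chess.engine.Limit(depth=20))
print("Evaluation:", info["score"].white())
print("Engine's best move:", board.san(info["pv"][0]))

engine.quit()
```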

So what makes Gukesh so good at chess?

A popular theory is that it's because he didn't train with an engine until he was already a GM.

Unlike most players, Gukesh trained by analyzing positions by hand. Most would consider this a disadvantage, but his coaches believe it helped him develop a unique playing style and stronger calculation skills over the board. It's important to note that he uses AI to study now, and it's helped push him from just a strong GM to one of the top players in the world.

To the larger question - is AI making us dumber?

Does an engineer who builds systems without Copilot, or an SDR who writes outbound without Clay have a long-term advantage? Does a sales engineer who responds to RFPs without DealPage have an advantage?

The answer is complicated, and I for sure don't have it. For some, the answer is probably yes - if you're copy-pasting from ChatGPT into your homework assignments or emails, you might be losing some foundational skills that others around you have. But let's go deeper.

Writing

When it comes to writing, I've thought a lot about maintaining my 'voice'. LLMs can theoretically write in any voice I ask them to, but they usually end up sounding generic. For folks outsourcing lots of their writing to LLMs, I wonder if their internal voice starts changing or loses some of its uniqueness. AI-generated marketing content sounds crappy and is already being heavily penalized by Google.

On the other hand, if you have a strong voice and can get your thoughts on paper, I think using AI to transform it into different formats (tweet, blog post, etc.), correct your grammar, and suggest edits is a 10x multiplier. The difference is in developing your voice and how you use the AI.
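For what it's worth, the "transform my draft, don't write it for me" workflow is easy to script. Below is a hedged sketch using the OpenAI Python SDK - the model name and prompt are assumptions, and the point is that the draft (and the voice) stays yours:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = """Rough notes, written in my own voice, about why engine-free
training may have helped Gukesh build stronger calculation skills."""

# Ask the model to reformat and proofread - not to invent the ideas.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption; use whatever you have access to
    messages=[
        {"role": "system", "content": "Rewrite the user's draft as a tweet. "
                                      "Fix grammar, keep the author's voice and claims, "
                                      "and do not add new ideas."},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```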

Coding

Is there a use for a junior developer anymore, or does everyone become a systems engineer when AI agents can complete tickets end to end? Or do the small coding tasks juniors cut their teeth on build the skills and systems knowledge needed to become a good systems engineer? Understanding codebases is such a crucial skill that I worry this is one of the places AI could actually hurt new SWEs, especially new grads.

For devs who already have foundational skills and an understanding of large codebases, offloading 'small' tasks to AI is a huge unlock.

Brainstorming

AI can be a good starting point, but if you're using ChatGPT to brainstorm, you're nuking one of the best parts of being sentient - creativity. You'll never get as good an idea from an LLM, just the common ones.

Search

AI search tools like Perplexity make getting answers to simple questions way easier. But those who use LLMs like ChatGPT as a replacement for search are likely losing the ability to assess truth and evaluate sources. That risk is largely addressed by tools that cite their sources and crawl the live web, so the only question here is whether you choose the right tool.

Or are these skills useless in the future anyways?

These skills could be like cursive. In 3rd grade they made us all learn cursive because 'every middle schooler has to write in cursive.' I never wrote in cursive. Is that how we're going to treat cognitive skills in the future? If that happens, pretend you never read this.

In the meantime, I would focus on building foundational skills and a unique perspective - then layering in AI to leverage them as much as possible. At least, that's what I'm going to do.