
Do large language models have dyslexia?
By Louis Rosenberg | Published: 2025-04-11 16:00:00 | Source: The Future – Big Think
Like millions of Americans, I am dyslexic. You'd probably never know it unless you were sitting in the passenger seat of my car, screaming "I said left!" as I turn, inexplicably, to the right. And if you asked me afterward why I went the wrong way, you'd find it hard to believe the answer: I couldn't remember which side was my right and which was my left. It's simply impossible for me.
I know this doesn't make sense. After all, I have no problem distinguishing other things. I know top from bottom. I know black from white. I know forks from spoons. But I don't know left from right. My brain just isn't wired that way. The same is true for many people with dyslexia, and I suspect that multimodal large language models (MLLMs) may suffer from a form of dyslexia as well.
Before I describe the recent study that made me wonder whether MLLMs have dyslexia, let me tell you what it's like to live with the condition and describe what I believe is going on inside my brain. I'll also explain why dyslexia, which makes life difficult for millions of students around the world, can also be a cognitive gift that fosters creativity and innovation.
Living with dyslexia
Like many children with dyslexia, I found school very difficult. That's because many of the basics students need to learn were created by people who process spatial information differently than I do. For example, we humans created two lowercase letters in the English alphabet, "b" and "d," that differ only in whether they point to the left or to the right. For decades, I couldn't tell the difference. This is a very common problem among people with dyslexia.
Likewise, many of the rules of mathematics rely on algebraic steps that run from left to right. The same goes for telling time on a traditional clock face: it only makes sense if you know the difference between clockwise and counterclockwise. Calendars are difficult too, because their spatial layout runs left to right. As a result, following the rules of mathematics and reading clock faces or calendars are common challenges for many children with dyslexia.
These challenges don't end in elementary school. I still remember getting a problem wrong in a physics class during my first year at Stanford. There's a simple convention in physics called the "right-hand rule" for determining how vectors are oriented. Unfortunately, when I took the test, I used my left hand. That's dyslexia. It has nothing to do with concentration or intelligence; your brain simply works differently from the brains of the people who created the cultural conventions we use in written language, mathematics, and many branches of science.
So, how is the dyslexic brain different? I can only speak for myself, but having spent years thinking about this strange mix of strengths and weaknesses that comes from how we process spatial information, I’m pretty sure I know what’s going on. It’s all about the “mind’s eye.” By this I mean the way I visualize things within my mind and store spatial elements in memory.
For most people, the mind's eye sits behind the bridge of the nose and looks out into the world, unless they make a concerted effort to shift away from that perspective. This makes sense, because it's how our brains receive visual content (i.e., from a first-person perspective). But when I recall things in my mind (objects, environments, images, or text), I don't imagine them from a fixed first-person perspective. I think of them from all directions at once, more as a vague cloud of views than from any one grounded vantage point.
The problem is that if your brain stores "b" from all points of view simultaneously, it becomes identical to "d." It's not that I confuse the two symbols; they are the same symbol, and the only difference is whether you visualize it from the front or the back. The same applies to clock faces. How can you remember the difference between clockwise and counterclockwise if you imagine the object from several directions at the same time?
This brings me back to multimodal large language models, which process and interpret images and video. These models are remarkable. They can match or exceed human performance on countless tasks, for example, diagnosing cancers from visual slides better than any human being. And yet a recent study found a surprising result: all major MLLMs currently struggle to tell time on analog clocks. According to the study, GPT-4o read clock faces correctly only 8% of the time. Claude 3.5 Sonnet did even worse, at 6%. Gemini 2.0 was the best of the group, but still managed only about 20%.
These numbers are surprisingly low, especially when you consider how well these AI models perform in other contexts. In addition, the same study found that MLLMs also struggle when asked to interpret calendars. This is strikingly similar to human dyslexia, not just in the simple artifacts that cause the problems (clocks and calendars), but in the confusing mix of strengths and weaknesses that allows someone like me to earn a PhD and work successfully as a computer scientist and engineer, yet still fail the "turn left here" test.
Before going any further, I figured I should test this myself rather than rely solely on the academic paper mentioned above. So I fired up two LLMs and asked them how many seconds the red hand on the following clock face indicated:
Here are the two responses I got back:

The correct answer is just under 9 seconds, but both LLMs reported the number incorrectly (11 seconds for Gemini and 12 seconds for ChatGPT). This is a surprising error, especially since both models approached the problem correctly by judging the hand's distance from the "2" on the dial.
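If you'd like to run the same informal check yourself, here is a minimal sketch of one way to pose the question to a multimodal model programmatically. It assumes the OpenAI Python client, an API key in the environment, and a screenshot of the clock saved locally as clock.png; the model name and prompt wording are illustrative choices, not necessarily the ones used above.

```python
# Minimal sketch: ask a vision-capable model to read a clock image.
# Assumes the OpenAI Python client (pip install openai), an API key in
# OPENAI_API_KEY, and a screenshot of the clock saved as clock.png.
import base64
from openai import OpenAI

client = OpenAI()

with open("clock.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; this choice is an assumption
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "How many seconds does the red second hand on this clock face indicate?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```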
Now, I'm pretty sure the LLMs can "see" which side of the "2" the second hand is on. So why did both models make this mistake, which happens to be the same type of mistake I would have made as a kid? Well, if you confuse clockwise and counterclockwise, you might report that the hand is a little past the "2" when it is actually a little before it, because you imagine the hand moving in the wrong direction.
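To make that concrete, here is a quick back-of-the-envelope calculation. The 53-degree hand angle is an assumption on my part, chosen only because it corresponds to "just under 9 seconds"; it was not measured from the actual image.

```python
# A hand rotated theta degrees clockwise from 12 o'clock indicates
# theta / 6 seconds (360 degrees corresponds to 60 seconds).
def seconds_from_angle(theta_deg: float) -> float:
    return (theta_deg % 360) / 6.0

theta = 53.0  # assumed angle: just shy of the "2" marker, which sits at 60 degrees
correct = seconds_from_angle(theta)        # ~8.8 s: a little BEFORE the "2"
flipped = seconds_from_angle(120 - theta)  # ~11.2 s: same offset, mirrored to AFTER the "2"
print(f"correct reading: {correct:.1f} s, mirrored reading: {flipped:.1f} s")
```

Flipping the offset around the "2" turns an answer of roughly 9 seconds into roughly 11, which is exactly the kind of answer both models gave.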
What makes this error so puzzling is how well LLMs perform on other visual tasks. In 2023, I participated in a spatial-estimation study in which we asked 240 people to estimate, from a photograph, how many gumballs were in a jar. The average person was off by 55%. We also asked ChatGPT-4, and it was significantly more accurate, with an error of 42%. Clearly, LLMs can outperform humans at complex visual-spatial tasks, and yet the average first grader is probably better at reading clocks.
What does this teach us about current AI systems?
To me, this suggests that LLMs store and process spatial information very differently than humans do, sometimes stumbling over cultural conventions that assume the viewer maintains a particular perspective. When you ask an AI to interpret a tissue sample and assess whether it is cancerous, accuracy is not affected by orientation. But when you ask it to read a clock face, it has to view the image from a specific orientation, or the system will make errors.
In humans, such errors are labeled a "learning disability," and for millions of people with dyslexia they create daily challenges, especially for children. However, the ability to perceive the world from unconventional perspectives can also be a cognitive gift. This may be one reason why people with dyslexia are often highly creative and innovative. In fact, research studies have shown that children with dyslexia score much higher on creativity tests than the general population. And many adults with dyslexia credit their "disability" for their success in a wide range of fields.
Personally, I'm certain my own career was transformed by dyslexia. In college, it gave me a deep fascination with how people process spatial information and inspired me to pursue a PhD focused on enhancing human cognition by adding virtual content to the real world. That work led me to research on human sensory feedback at an Air Force research laboratory, where I developed the first mixed reality system, and I've been working in virtual reality, augmented reality, and artificial intelligence ever since. I've heard many similar stories from people with dyslexia who leverage their unusual perspective to innovate in very different fields, from artists and filmmakers to scientists, writers, and even quite a few athletes.
As AI systems develop, I think we will learn more about the benefits and drawbacks of perceiving the world in radically different ways. After all, we don't know how intelligent AI systems will eventually become, but we do know that they don't learn, reason, or perceive the way our brains do. Not even close.