
Volume 2. Issue 3.
I thoroughly enjoyed this article about Emily Bender this weekend by a brilliant journalist with whom I’ve just become acquainted, Tasmia Ansari. I’ve been working on some longer-form pieces recently, and I’m eager to spell out why so many of us with cognitive science and cognitive linguistics backgrounds feel frustrated with what’s happening at OpenAI right now. The model they use has been around for a long time, and no one ever argued hard for its ability to produce AGI. Some even protest its being called weak or narrow AI, depending on your definition of “intelligence.”
As early as the ’80s and ’90s, there was a call for a second-generation cognitive science that would provide a more satisfactory model of intelligence, one that could, perhaps, inform computational models that might achieve human intelligence with more verisimilitude than this stochastic typewriter that plays Mad Libs reasonably well. Those of us who write full time start to recognize it’s Grammarly with a lot of raw, poorly structured, ill-gotten, non-normalized data up its posterior. The NLP search is powerful. The rest? Showy but unstable and unreliable, and we’ve been doing the things it’s good at for ages, like protein folding and extracting intelligence from images. If you want to read my hero and the person I modeled myself on, check out Lucy Suchman. She deserves far more credit; she has influenced many fields.
Emily doesn’t directly say why being able to express the systematicity of language computationally, with some degree of efficacy, is inadequate to be called “intelligence.” But you can feel the frustration many of us share in the money quote at the top of the article.
“The handful of very wealthy (even by American standards) tech bros are not in a position to understand the needs of humanity at large,” she argues.
When I started working with founders as a consultant and coach, I brought with me a deep study of start-up and technology fraud. (At the time, I expected the fraud case studies I had pored over to be the exception rather than the rule; the deeper I got into the work, the more I learned otherwise.)
All founders have a “reality distortion field.” Steve Jobs is the famous example. It combines charisma, emotional influence, hype, and stubborn denial in the face of bad news. Every founder, and every person who brings anything new into the world, must have a reality distortion field of some kind. It’s a protective layer that guards against the fear of failure that stops most people from starting anything. In the case of Jobs, he had people who could bring him back to earth (sometimes), and legitimate innovation and attention to design made success all but inevitable. In the case of Elizabeth Holmes and Adam Neumann, there was a reality distortion field and virtually nothing else. In such cases, it can reach the point of pathological factitious disorder, and founders develop coercive control techniques to keep the reality distortion field from being breached and destroyed by facts, numbers, and real-world outcomes. Sadly, the field also filters out the human impact on everyone connected to it. Most human-centered researchers and designers have experienced these leadership distortion fields. We find ourselves very unwelcome within them.
I’ve started two organizations in my life: a multimedia performance company that changed names several times, and a consulting company that also changed names and rebranded. I might call Singular XQ a third, as transitioning my consulting business into a nonprofit has significantly transformed the way we do business. Suffice it to say this is a de novo effort.
In any event, I will say this: I am constantly on the receiving end of attacks, criticism, and unsolicited advice. People frequently misunderstand what I am doing and why, and comments often arrive as backhanded compliments or slights. “No one will care about your mission if you can’t 10x their money.” It takes a lot of resolve to grit your teeth and keep going, and it takes some filtering to know when you should listen to feedback and when it’s just sideswipes from jealous or threatened actors, or from people who need to cut others down. At times, it creates a reality distortion field out of necessity. You have to play up what’s possible to fight the tsunami of sentiment and voices that believe what you are doing is impossible.
For example, without significant capital, a big name, or many big-time connections, I believe I can start a nonprofit of concerned, intelligent people who want to conduct research and experiments in emerging technology for the common good. I can persuade people with highly paid skills to donate their time and work for the public, even when they could earn significant money in the commercial sector. And that effort, over time, will become enough to provide a public counterpoint and a consumer-protection digital commons that educates, innovates, and sustains a robust, open-source innovation ecosystem.
That’s, well, ridiculous. Or, as some say, “tilting at windmills.”
Nevertheless, I persist.
In the meantime, these past nine months have been a pervasive and contagious reality distortion field, one where people, including myself, were initially drawn into the hype to such a degree that you start to wonder if the distortion has become permanent. OpenAI and its black box. ChatGPT’s claims versus its reality. It’s been giving me a lasting dizzy spell. How about you?
I feel this may become my generation’s Kennedy assassination. Thirty-five years from now, we will be unsealing documents about who knew what and when, who was pulling whose strings, and who may or may not have been a double agent. There’s a bitter irony in “we are the only ones who can prevent this tech from getting into the wrong hands,” declared in the aftermath of the Oppenheimer phenomenon, only for that tech to escape into the wrong hands immediately. It’s something Ricky Gervais should be working on a script for, to be honest.
Ricky, beb. Call me. I got ideas.
