Disclaimer: The views and opinions expressed in this blog are entirely my own and do not necessarily reflect the views of my current or any previous employer. This blog may also contain links to other websites or resources. I am not responsible for the content on those external sites or any changes that may occur after the publication of my posts.
End Disclaimer
“And I discovered that my castles stand, Upon pillars of salt and pillars of sand”
-Viva La Vida
This isn’t a rant. This isn’t a lament.
This is just a little red flare shot up into the sky, a sad little song played on the world’s tiniest violin.
Something’s rotten in the State of Interviewing, and well beyond that.
Something’s beginning to fray at the Venn diagram intersection of Epistemology and Knowledge.
LLMs are changing the idea of what it means to know something, or at least to think we know something.
Bite-sized, easy-to-acquire, ephemeral bits of knowledge.
How do we know what we know?
Can something be considered "knowledge" if it’s not permanently understood or retained?
Knowledge, by the dictionary definition, includes the work done to get there: “facts, information, and skills acquired by a person through experience or education”.
This is not that.
To use an imprecise computer analogy: it’s much more like RAM, much less like SSD/HDD.
The data gets lost when power is turned off.
How much of this process will people allow to be abstracted away?
People cheat a lot during interviews nowadays, using LLMs to get an answer to anything.
Correct technical answers come fast and furious on voice calls.
Video calls have a way of having “technical difficulties” right at the moment when you ask the person a hard question. The technical problems seem to fix themselves after about 10-15 seconds. Coincidentally, the person seems to have come up with a good answer during the blackout.
You need to have them in a room with you: two chairs, two people, air-gapped, laptops closed.
Cheating is sort of a big waste of everybody’s time- right?
I recently asked an interviewee about sparse encoders.
The person hemmed and hawed and said, “I know about encoders, but sparse encoders…hmm…”.
They then took a deeper, longer-than-usual pause and came back with an unbelievably timely, curious, and, some would say, suspiciously prescient and precise choice of words.
After their pause, they composed themselves and began by saying that “A Sparse Encoder attends selectively to only some parts”.
Why was that particular selection of words so surprising to me?
Because at that exact same time, I was looking at those words too.
Is it possible that they had previously used GPT as a study guide and memorized the exact same sequence of eight words, starting with “attends selectively”?
You could say there is a “non-zero probability” of choosing that sequence.
But I call BS.
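To put a toy number on that “non-zero” (my own back-of-envelope; the “five plausible alternatives per word” is an assumption, nothing more): if an honest answerer independently picks each of the eight words from even five reasonable options, the chance of an exact match is roughly

\[
P(\text{exact 8-word match}) \approx \left(\frac{1}{5}\right)^{8} = \frac{1}{390{,}625} \approx 2.6 \times 10^{-6}
\]

Words in a sentence aren’t independent, so treat this as a sketch, but “non-zero” and “plausible” are very different things.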
For the record, and from my perspective, there are a lot more ways of getting this random-ass sparse encoder question approximately right (scaling, efficiency, model interpretability, or “well, I know what a sparse autoencoder is”) than exactly wrong (a correct answer arrived at through cheating).
Even just saying “I don’t know” is a better answer, provided the rest of the interview isn’t riddled with them.
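For what it’s worth, here’s the kind of “approximately right” I’d have happily accepted, in code. This is a minimal sketch of a sparse autoencoder (my own toy example in PyTorch; the dimensions and the L1 weight are illustrative assumptions, not anything canonical): an overcomplete hidden layer plus a sparsity penalty, so only a few features fire for any given input.

```python
# A toy sparse autoencoder (PyTorch). All names and numbers are my own
# illustrative choices, not a reference implementation.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        # Overcomplete hidden layer: d_hidden >> d_in
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        # ReLU keeps activations non-negative; with an L1 penalty, most end up zero
        z = torch.relu(self.encoder(x))
        return self.decoder(z), z

model = SparseAutoencoder(d_in=64, d_hidden=512)
x = torch.randn(32, 64)                  # a toy batch of 32 inputs
x_hat, z = model(x)
l1_weight = 1e-3                         # sparsity strength (assumed hyperparameter)
loss = ((x_hat - x) ** 2).mean() + l1_weight * z.abs().mean()
loss.backward()                          # reconstruction error + sparsity penalty
```

Even a hand-wave at that loss function would have landed as real, cobbled-together knowledge.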
But wait, hold on…
This is straight-up cheating, right? Getting answers to questions you didn’t have the answers to, and passing them off as proof of knowledge.
Yes, clearly, but this is also happening in the year 2025.
And this is where things start to get funky dunky given the state of the world.
What constitutes proof of knowledge in 2025, and how much wiggle room is there?
LLMs with access to the history of the internet’s information are now ubiquitous, commoditized, and available for free.
My wife posed this question to me after I mentioned some of the LLM interview-cheating stories to her:
Is this the new normal?
In some ways, isn’t this an (un)intended side effect of the new LLM information retrieval: its ubiquity and speed?
Aren’t there billionaires working on augmented knowledge through neural implants as we speak?
What will the difference be, then, between people typing to find the answer as quietly as they can on the other side of an interview, and people literally getting the answer sent to them as a thought? “Hey, no neural implants allowed”; you’ll need to show proof of doing the interview from a Faraday cage (or whatever they have then).
Good luck.
What happens to a person’s actual knowledge base?
Does it remain always on call, always augmented, always supplemented, always just a button tap away?
What will it mean to “know something”- to be the expert in the room when the barrier to entry for expertise evaporates?
What is an expert in the age of LLMs?
Maybe it will be okay.
But if I’m being honest, I’m sort of nervous about how this plays out.
I like the current process of it being an actual process- of having to study and cobble together pieces of information from disparate resources towards the goal of becoming “learnèd”.
The times at bat, the development of what can be considered domain expertise, the heavy learning that comes from trying and getting the answers wrong.
Creating a knowledge base through trial and error, stitching together a latticework of similar pieces and not-so-obvious pieces: the pieces an LLM would never see in its distribution.
There is a drudgery in this process that somehow feels like knowledge glue.
You know…work.
That’s a huge difference from the direction we’re headed: inch-deep, inch-wide, goldfish-memory knowledge castles built on pillars of sand, from things we, just a second ago, asked an LLM.
Things that we, just a second ago, “learned”.
Whatever that means.
Don’t slow down.