Sergey Nazarov on Why the World Needs a New Model of (Cryptographic) Trust
Although it's really a podcast: here.
I rarely bother with podcasts, as they usually have low information content, and it's much faster to read a transcript.
Anyway, interesting starting point.
Let's start with what society's trust model has been. [...] this difference between a probabilistic world and a deterministic world. So I think many people assumed that they were in a deterministic world where things are guaranteed and where there is no ability to deviate, but really they're in a probabilistic world, where even if you have a password to a bank account you don't actually have control of anything in that bank account, you just have a way to log into some system with a nice logo in it that gives you the illusion of control, basically, or the illusion of safety, whereas deterministic systems like blockchains and smart contracts give you real control and in many senses real safety.
The big assumption here is that such systems are truly secure and truly deterministic; the number of scams and rugpulls shows that crypto-systems are also in a trust-building phase - ironic, really. And that first-level trust is a trust that the system is exactly what it claims to be.
It's about facilitating that new level of reliability and deterministic guarantees which are attractive to certain groups that understand this problem, or have philosophical views, libertarian views, other views, that lead them to seek this solution sooner because they understand the fragility of the systems that they are using that don't have these guarantees.
So what I'm basically saying is that all the agreements that you have in a digital only system in a web2 system, even though you feel you can rely on them, the reality is that in many cases you cannot, and the brands that provide you those agreements want you not to think about that, they don't want you to wonder about your actual relationship with them, because it doesn't benefit them. And so the world has been set up in a way where there's all these big big counterparty risks: censorship risks, tampering risks, adversary risks, security risks, downtime risks, all these risks that are abstracted away from you but that you are exposed to.
This has been true throughout history: believe the priests and the rulers, and all the other believers around you.
Just look at the global fake news - recipes for manufactured consensus, with causes and effects pre-digested and served on a plate for gullible minds. Even many alleged debates are constructed so as to exclude the deeper questions, which must stay off-field lest they expose the false dichotomy being presented.
This isn't necessarily a new philosophical outlook on crypto, although Nazarov feels that more people need to look at this perspective so that engagements with crypto platforms are not just at the financial level. However, I'm not sure that the tokenisation of freedoms is...actually freedom.
OK, lurking inside is a deep and naive misunderstanding: deterministic does NOT mean predictable!
This is a major discovery in mathematics, algorithms and complex systems - fractals are a simple visual example of this.
Linear.
That's what people want - linear and predictable, just like their view of the universe.
Linear would, indeed, be deterministic.
But if you also want algorithms to be reactive to change, then linear will be a disaster, and you then enter the world of complex systems - and sometimes, unpredictable behaviour.
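To make that concrete, here is a minimal Python sketch (my illustration, not anything from the podcast) using the logistic map, the textbook example of a system that is completely deterministic yet unpredictable in practice: the update rule is a one-line formula, but two runs whose starting points differ in the ninth decimal place end up nowhere near each other.

```python
# Deterministic does NOT mean predictable: the logistic map x -> r*x*(1-x).
# At r = 4.0 the map is chaotic; a 1e-9 difference in the starting point
# is amplified roughly twofold per step, so after ~50 steps the two
# trajectories bear no resemblance to each other.

def logistic_map(x0: float, r: float = 4.0, steps: int = 50) -> float:
    """Iterate the fully deterministic update rule and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_map(0.200000000)
b = logistic_map(0.200000001)  # initial condition differs by 1e-9
print(f"{a:.6f} vs {b:.6f}")   # wildly different outputs from the same exact rule
```

No randomness anywhere, and yet no practical way to predict the long-run behaviour without running the system itself - which is exactly the gap between "deterministic" and "predictable".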
I find it funny that the guy then goes on to talk about AI - which is totally non-linear!!
I'm not sure he has thought this all through - Chainlink may not be amateurs, but they still need to nail down a clearer philosophical trope, with fewer holes.
That, somehow, AI and blockchains are going to "verify truth" is deeply disturbing. We have seen all the truth-mucker sites and their scammy intellectual tricks to trash the truth. Feed all that into a deeply naive AI and we get another tyranny of synthetic truths.
One thing blockchains can do - yet another research project from those nice folks at DARPA - is to verify the process the AI followed. That process is often opaque, and forcing it through some permanent tracing mechanism means one can replay the AI's path to its conclusions in slow motion.
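A hedged sketch of what that could look like, assuming nothing more than a hash-chained, append-only log: each reasoning step is chained to the hash of the previous entry, so the process can be replayed and any after-the-fact tampering is detectable. The function names and the in-memory list are my invention for illustration, not DARPA's or Chainlink's actual design; a real system would anchor the chain head on a blockchain rather than keep it in memory.

```python
# Sketch: an append-only, hash-chained trace of an AI's reasoning steps.
import hashlib
import json
import time

def append_step(log: list[dict], step: str) -> None:
    """Append one reasoning step, chained to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "step": step, "prev": prev_hash}
    # Hash is computed over ts/step/prev only, then stored alongside them.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered step breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("ts", "step", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True

trace: list[dict] = []
append_step(trace, "retrieved source documents")
append_step(trace, "weighed evidence A against evidence B")
append_step(trace, "drew conclusion C")
print(verify(trace))  # True; alter any step's text and it prints False
```

Note that this verifies the *process* was recorded and untampered - it says nothing about whether the conclusions are true, which is precisely the limit of the idea.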
I still don't see an AI as a semantic engine - try asking one to find the logical and experimental errors in a research paper!
Or ask one to prove every assertion it makes. That an AI can appear as intelligent as the average human says something very depressing about the average human.
I've seen ChatGPT claim to different people that it "cannot do mathematics"! How pathetic is that!? You're telling me it can code but can't understand actual symbolic logic or mathematical notation? How dumb is that!? That really does make it a propaganda engine, or a cheap therapist, or a pseudo-semantic joke generator.