How Blockchain Technology Can Shed Light on Previously Hidden Inputs
Part of the magic of generative AI is that most people have no idea how it works. On some level, it's even fair to say that nobody is entirely sure how it works, as the inner workings of ChatGPT can perplex even the most brilliant scientists. It's a black box. We're not entirely sure how it's trained, which data produces which results, and what intellectual property gets trampled on in the process. That's part of the magic and part of what's terrifying.
Ariana Spring is one speaker at this year's Consensus festival in Austin, Texas, May 29-31.
What if there was a way to peer inside the black box, allowing a clear visualization of how AI is governed, trained and produced? That is the goal – or one of the goals – of EQTY Lab, which conducts research and creates tools to make AI models more transparent and collaborative. EQTY Lab's Lineage Explorer, for example, provides a real-time view of how a model is built.
All of these tools are intended as a check against opacity and centralization. “If you don’t understand why an AI is making the decisions it’s making or who’s responsible for them, it’s really hard to ask why harmful things are being spewed out,” says Ariana Spring, head of research at EQTY Lab. “So I think centralization – and keeping these secrets in black boxes – is really dangerous.”
Together with her colleague Andrew Stanco (chief financial officer), Spring explains how crypto can create more transparent artificial intelligence, how these tools are already being used in the service of climate science, and why open-source models can be more inclusive of and representative of humanity at large.
The interview has been condensed and lightly edited for clarity.
What is the vision and goal of EQTY Lab?
Ariana Spring: We are pioneering new solutions to build trust and innovation in artificial intelligence. Generative AI is the hot topic right now, and it's the most emergent area, so that's something we're focused on.
But we also look at all the different types of AI and data management. Really, trust and innovation are what we anchor everything on. We do this by using advanced cryptography to make models more transparent, but also collaborative. We see transparency and collaboration as two sides of the same coin in creating smarter and safer AI.
Can you talk a little more about how crypto fits into all of this? A lot of people say that “crypto and AI are a great fit,” but often the logic stops at a very high level.
Andrew Stanco: I think the intersection of AI and crypto is an open question, right? One thing we've found is that the hidden secret of AI is that it's collaborative; it has a multitude of stakeholders. No single data scientist could create an AI model. They can train it, they can fine-tune it, but cryptography becomes a way to do something and then have a tamper-proof way to verify that it happened.
So, in a process as complex as AI training, having verifiable and tamper-proof attestations, both during training and afterward, really helps build trust and visibility.
Ariana Spring: What we do is make sure that at every stage of the AI lifecycle and training process there is an attestation – a stamp – of what happened. There is the decentralized ID, or identifier, associated with the agent, human or machine, performing the action. You have the timestamp. And with our Lineage Explorer, you can see that everything is automatically recorded using cryptography.
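To make that concrete, here is a minimal Python sketch of what one such lineage attestation could look like. The field names, the "attest" helper and the HMAC-based signing are illustrative assumptions, not EQTY Lab's actual format; a production system would use real key management and likely anchor the records to a ledger.

    # Illustrative sketch only: the field names and HMAC-based signing are
    # assumptions for demonstration, not EQTY Lab's actual attestation format.
    import hashlib
    import hmac
    import json
    import time

    SIGNING_KEY = b"demo-key"  # stand-in for the acting agent's private key

    def attest(agent_did: str, action: str, artifact: bytes) -> dict:
        """Create a tamper-evident record of one step in the AI lifecycle."""
        record = {
            "agent": agent_did,        # who acted, as a decentralized ID
            "action": action,          # what happened, e.g. "fine-tune"
            "artifact_sha256": hashlib.sha256(artifact).hexdigest(),  # output fingerprint
            "timestamp": time.time(),  # when it happened
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    # Example: attest("did:example:trainer-1", "fine-tune", model_weights_bytes)

Anyone holding the key can recompute the signature over the record and detect tampering; that is the "stamp" idea in miniature.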
We also use smart contracts in our governance products. So, depending on whether parameter X is satisfied or not, a given action may or may not proceed. One of the tools we have is Governance Studio, which essentially programs how an AI is trained or how its lifecycle is managed, and that is then reflected downstream.
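As a plain-code analogue of that kind of gate, here is a short Python sketch: a policy object with parameters, and a check that decides whether a training action may proceed. In practice this logic would live in a smart contract; the policy fields and the "may_proceed" function are invented for illustration.

    # Illustrative analogue of a smart-contract governance gate; the policy
    # fields and thresholds are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class TrainingPolicy:
        require_licensed_data: bool = True  # "parameter X" in the quote above
        max_epochs: int = 10                # an agreed training budget

    def may_proceed(policy: TrainingPolicy, data_is_licensed: bool, epoch: int) -> bool:
        """Return True only if the proposed training step satisfies the policy."""
        if policy.require_licensed_data and not data_is_licensed:
            return False                    # block training on unlicensed data
        return epoch < policy.max_epochs    # block runs past the agreed budget

    # Example: may_proceed(TrainingPolicy(), data_is_licensed=True, epoch=3) -> True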
Can you clarify a little what kind of tools you are building? For example, are you building tools and doing research intended to help other startups train models, or are you training models yourselves? In other words, what exactly is EQTY Lab's role in this ecosystem?
Andrew Stanco: It's a mix, in a way, because our focus is on the enterprise, since that's going to be one of the first big places where we need to get AI right from a training and governance perspective. If you dig into that, we need a place where a developer, or someone in that organization, can annotate the code and say, “Okay, this is what happened,” and then create a record. It's enterprise-focused, with an emphasis on collaborating with the developers and the people who create and deploy models.
Ariana Spring: We also worked on model training through the Climate Intelligence Fund. We contributed to the creation of a model called ClimateGPT, a climate-specific large language model. That is not our bread and butter, but we went through the process and used our suite of technologies to visualize it, so we understand what it's like.
What excites you most about AI and what terrifies you most about AI?
Andrew Stanco: I mean, for excitement, that first moment of interacting with generative AI felt like uncorking a bottle. The first time you write a prompt in MidJourney or ask ChatGPT a question, no one has to convince you that it might be powerful. And I had thought there wasn't much new out there anymore, right?
As for the terror, I think it's a concern that's perhaps the subtext of a lot of what's going to happen at Consensus, just glancing at the agenda. The concern is that these tools let existing incumbents entrench themselves even deeper. That this is not necessarily a disruptive technology, but an entrenching one.
And Ariana, your main AI excitement and terror?
Ariana Spring: I'll start with my fear, because I was about to say something similar. I would say centralization. We have seen the harms of centralization paired with a lack of transparency about how something works. We've seen this over the last 10 or 15 years with social media, for example. And if you don't understand why an AI makes the decisions it's making or who's responsible for them, it's really hard to ask why harmful things are being spewed out. So I think centralization – and keeping these secrets in black boxes – is really dangerous.
What excites me most is getting more people involved. We had the chance to work with different kinds of stakeholder groups as we were forming ClimateGPT, such as Indigenous groups, low-income urban Black and brown seniors and youth, and students in the Middle East. We worked with all these climate activists and academics and said, “Hey, do you want to help improve this model?”
People were really excited, but maybe they didn't understand how it worked. Once we taught them how it works and how they could help, you could see them go, “Oh, cool.” They gain confidence, and then they want to contribute more. So I'm really excited, especially with the work we're doing at EQTY Research, to start publishing some of these frameworks, so we don't have to rely on systems that maybe aren't as representative.
Well said. See you in Austin at the Consensus AI Summit.