Web3 CMO Stories

Exploring the Convergence of AI and Blockchain for Predictive Power with Jordan Miller – The Satori Network Vision | S4 E06

May 03, 2024 Joeri Billast & Jordan Miller Season 4


Jordan Miller is the mind behind Satori Network, a groundbreaking project that merges AI with blockchain to predict the future. His journey spans from philosophy and information systems to the forefront of crypto AI, offering a unique glimpse into the power of decentralized technology.

In our conversation, we explore how Satori Network aims to create a decentralized AI solution focused on time and future prediction, leveraging the interconnectedness of all things in time to improve predictions across the network.

  • The philosophy behind Satori Network and how Jordan's background in philosophy and information systems influenced its inception
  • How Satori Network implements principles inspired by the human brain, such as repeating neural circuits, unique redundancy, and hierarchical structures
  • The importance of decentralizing AI to distribute its power and ensure diverse perspectives
  • Satori Network's approach to using raw data and the actual future as the error metric, rather than curated datasets, to avoid biases and propaganda
  • Jordan's vision for Satori Network's development, including a public good phase focused on broad predictions and a private prediction marketplace for individual data streams
  • The potential for Satori Network's future predictions to aid decision-making and help societies adapt more quickly to emerging challenges
  • How users can download and run Satori Network on their machines, contributing to the network's growth and accuracy

This episode was recorded through a Podcastle call on March 26, 2024. Read the blog article and show notes here: https://webdrie.net/exploring-the-convergence-of-ai-and-blockchain-for-predictive-power-with-jordan-miller-the-satori-network-vision/


Jordan:

You should just have the raw data flow into the system and it learns how to predict the raw data. It's what actually happens in the future as the error metric, and I think that's important. We can't get the actual truth from a language model, but we can build models to predict the future and that can get us as close as possible to the actual truth.

Joeri:

Hello and welcome everyone. Welcome to the Web3 CMO Stories podcast. My name is Joeri Billast and I'm your podcast host, and today we are joined by an exceptional guest, because what he's doing is really interesting, really fascinating, and that's Jordan. Hi, Jordan, how are you? Hi, how are you? I'm good, like I said, excited about what I'm going to learn from you. If you don't know Jordan, Jordan Miller, he's the mind behind Satori Network, and that is a groundbreaking project that merges AI with blockchain to predict the future. Jordan's journey actually goes from philosophy and information systems to the forefront of crypto AI, and that offers, for me, a unique glimpse into the power of decentralized technology. So, Jordan, welcome. To start off, can you give us an overview of Satori Network and its ambitious goal of predicting the future?

Jordan:

Sure.

Jordan:

So we've been working on Satori for about two years now and we just entered beta.

Jordan:

So the first thing to know about Satori is that it's a decentralized AI solution. There aren't very many of those, but we focus on time and future prediction, and we believe that domain is really well suited, well positioned, for a decentralized AI solution. The reason is this: if you're building a model, for instance, some model that takes in images and puts out words, so your model can detect if there's a dog in the image, then if you output the prediction that there is or is not a dog in the image you received, it probably won't help other models that are looking at other kinds of images learn what they need to learn. But with the future, with temporal predictions, since all things are connected in time, we can share our future predictions on a network and have those predictions help other models train themselves and get better at what they do. So future prediction, I think, is a really good place to do decentralized AI, and that's our goal.

Joeri:

Yeah, it's really fascinating how decentralization can also help with that. But what's also interesting for me is, of course, your background in philosophy and information systems. Yeah, how did that influence the inception of Satori?

Jordan:

When I started going to college I didn't really know, I had no idea what I was going to do, and in fact that was a huge point of anxiety for me growing up. People would ask, what do you want to be when you grow up? And I had no idea, and I thought I'd figure it out by the time I got to college. And I still had no idea. I started taking courses and I started just exploring, and I took all kinds of courses because I was exploring. I took genetics and I found the brain to be really interesting. I took lots of philosophy, logic.

Jordan:

I've always been interested in science, so I tried to do all the sciences. I found chemistry to be the hardest one, I had real trouble with that. But I eventually figured out that what I was looking for was this: as data or information flows through a system, how does it change that system, and how does the change in that system then reflect back on the kind of data that comes out the other side? Naturally, I was very interested in all the brain stuff that I was learning, studying the brain: how does it create intelligence? It takes in a lot of data and then it outputs action, it outputs motor behavior, and that motor behavior affects the kind of input that it receives on its sensors, the eyes, the ears, the skin. I was very interested in how information flows through systems, really, and that led me to the brain and trying to figure out how intelligence works.

Joeri:

Yeah, so obviously you also studied that, how the brain generates intelligence. How did these insights help you shape the development of Satori?

Jordan:

Ah yeah, if you ask how the brain does it, people are like, well, it's very complicated and it's too much, and that's true, and there's a lot we don't know. But there are some things we do know that aren't really well-known everywhere. If you really want to know how the brain generates intelligence, what the fundamental principles are, I would point people to Jeff Hawkins and his book On Intelligence. It's really good, and I would say that, out of everything, his book was probably my biggest inspiration. In it I learned that what we have in the neocortex is a repeating series of circuits. So we have a little unit of a few neurons, right, and these neurons talk to each other in a particular way. They're a circuit, but then that circuit repeats. There's another repetition of it right next to it that it's connected to, and so we have this sheet of neuronal circuitry. It's a repeating pattern, and that is one of the things that I tried to implement in Satori, a repeating pattern. So what we have is, when you download the Satori neuron, you download the AI engine. All of the AI engines are basically the same, so they're a repeating structure. But because the data they ingest is different on every single one of them, and this happens in the brain too, they start to specialize in that data, so they change a little bit. One region of your brain gets auditory data and another region gets visual data, and they specialize. So one of the main principles of how the brain generates intelligence is that it uses this unique redundancy. There's a lot of redundancy, the same unit repeated over and over again, but they all specialize, and so they all become a little bit unique, a little bit different from each other. We've tried to implement that pattern on this global network of predicting the future.

Jordan:

The other thing we noticed is that if I'm predicting the future of something, I'm taking in a lot of data in order to do that. Let's give an example. Maybe I'm predicting the price of gold. I might be ingesting the price of silver, the price of wheat, other commodities, in order to find correlations and make the best prediction I can of the price of gold. So if my neuron is doing that, my Satori neuron, that's what we call the download, the AI engine, I can then export that prediction of the price of gold. And if you're trying to predict anything that's related to it, say the price of silver, you can ingest my prediction and basically leverage all those connections that I've made, all that work that I've done. You can leverage that without having to go grab and correlate the price of wheat with the price of gold and then see how that correlation affects the price of silver. You can just use my prediction and essentially leverage my work.
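To make the idea concrete, here is a minimal sketch (all names and numbers are hypothetical illustrations, not Satori's actual code) of how a downstream model predicting silver might ingest an upstream node's published gold prediction as one ordinary input feature, inheriting that node's correlation work:

```python
def silver_features(silver_history, gold_prediction):
    """Build an input vector for a hypothetical silver model: the latest
    price, recent momentum, and an upstream node's shared gold forecast."""
    return [
        silver_history[-1],                       # current silver price
        silver_history[-1] - silver_history[-2],  # recent change
        gold_prediction,                          # leveraged upstream work
    ]

def predict_silver(features, weights=(1.0, 0.5, 0.01)):
    """Toy linear model over those features; in practice the weights
    would be learned, not hard-coded."""
    return sum(w * f for w, f in zip(weights, features))
```

The point is only that the gold node's output becomes an ordinary input column for the silver node, so the silver model never has to fetch or correlate the wheat and gold data itself.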

Jordan:

So what we have is a small two-layer, three-layer hierarchy, and that's another major component of intelligence in the brain: it creates hierarchies. Companies are hierarchies, and we see the same intelligence pattern instantiated in companies and organizations as well. The lower levels are focused on the moment, right now: I've got to move this product from the back to the front, I've got to do this. They have short time frames and they're in the day-to-day. They understand all the minute details, patterns that are very complicated down at the bottom, and then at the top you get to the CEO level and they're looking five years into the future. They're broad. We see this kind of pattern in the brain as well. We have our frontal cortex, which is broad, planning for the future, and then we have all these low-level pattern detectors. So we see a hierarchy there, and we try to instantiate the beginning of a hierarchy in the Satori network by being able to leverage predictions.

Jordan:

And I'd say the last parallel I drew between Satori and the brain is that the brain is trying to predict the future all the time, everywhere, all at once. That's what we do subconsciously. In order to recognize that, oh, somebody got a haircut, in order to find any anomaly, we're predicting the future and then subconsciously comparing our prediction with what we actually see, and when we get the comparison, we say, oh, you got a haircut, it looks great. We're always doing that, and that is the main thing. Prediction of the future is the main thing that allows our brains to be efficient, to anticipate what's going to happen and already be in place, already be ready, have our hand ready to catch the ball when it comes to us. So I would say that prediction of the future itself is the most fundamental layer of an intelligence that exists in time, and that's what Satori is trying to be for the world.

Joeri:

It sounds almost unbelievable, people could say. What is then the accuracy when you make these predictions? Does it depend on how big your network is, how big the intelligence is in there? Maybe you have some examples that you can give us. Did you say currency? Accuracy? Oh, accuracy, sorry, yeah, I'm not a native English speaker.

Jordan:

The accuracy is low. Right now we're in beta. We have a very basic, prototypical AI engine and we need to improve it as we go. One of the philosophies that I've taken with this entire project, and anything that I talk about basically, is essentialism. I try to do only what's essential, and so at the very beginning of this project, the essential thing was to get a feedback-loop AI engine, so that as data comes in, the AI says, oh, I can now predict the next data that will come in the future. That's a trigger for prediction, and all the time, every day, 24-7, it's producing models that try to predict the future better than the previous model it created. That's all it's doing all the time, but it's very prototypical. So we instantiated the feedback loop, the mechanism, but it's very basic, so the accuracy is not great. It's going to get better as we develop it, and it will get better as the network scales. We're in beta; we only have a very small network and a very small community too.
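That feedback loop can be roughly illustrated as follows. This is a hedged sketch under assumptions, not Satori's engine: the candidate models here are just exponential smoothers with random parameters, but the loop shape is the same, keep generating candidates and retain one only when it beats the reigning model at predicting what actually happened next:

```python
import random

def train_candidate(seed):
    """Hypothetical stand-in for model search: an exponential smoother
    with a randomly chosen smoothing weight."""
    alpha = random.Random(seed).random()
    def predict(series):
        level = series[0]
        for x in series[1:]:
            level = alpha * x + (1 - alpha) * level
        return level  # forecast for the next value
    return predict

def backtest_error(model, series):
    """Score a model against the actual future: predict each point from
    its own history and average the absolute errors."""
    errs = [abs(model(series[:t]) - series[t]) for t in range(2, len(series))]
    return sum(errs) / len(errs)

def improve(stream, rounds=50):
    """Keep producing candidates; a new model replaces the champion only
    if it predicted the observed future better."""
    best, best_err = None, float("inf")
    for seed in range(rounds):
        candidate = train_candidate(seed)
        err = backtest_error(candidate, stream)
        if err < best_err:
            best, best_err = candidate, err
    return best, best_err
```

The key point is the error metric: candidates are judged purely by what the data stream actually did next, with no curated target set in the loop.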

Jordan:

So we're in beta, and maybe in six months to a year we will launch and begin to scale, and that will also improve our predictions, because the more predictors you have, the better you can leverage each other's predictions and find the right correlations. Also, since we're doing this unique redundancy, we have multiple predictors on everything that gets predicted, three or more. That's partly because they're competing to make the best prediction; that's one way in which we try to get better predictions, they compete. And once you have that same pattern, unique redundancy plus competition, that's where the wisdom of the crowd comes out. As far as us humans go, we're all pretty much the same but have different backgrounds, and since we have this unique redundancy instantiated in us, we get this wisdom-of-the-crowd phenomenon where you can just average our answers. Some are wildly wrong and some are really close, but the average is a pretty good prediction. That's the goal too: as soon as it starts to scale, we can just average the answers and get a pretty accurate prediction.
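That averaging step is simple to sketch. In the hypothetical simulation below (illustrative only, not Satori's aggregation logic), several near-identical predictors, each with its own small bias and noise, estimate the same value, and their mean tends to land near the truth:

```python
import random

def noisy_predictor(truth, bias, noise, rng):
    """One of many redundant predictors: same structure, slightly
    different specialization (bias) and random error (noise)."""
    return truth + bias + rng.gauss(0.0, noise)

def crowd_prediction(predictions):
    """Wisdom of the crowd: average the competing answers so that
    errors in opposite directions cancel out."""
    return sum(predictions) / len(predictions)

rng = random.Random(42)
truth = 100.0
predictions = [noisy_predictor(truth, rng.uniform(-1, 1), 5.0, rng)
               for _ in range(30)]
consensus = crowd_prediction(predictions)
```

With thirty predictors whose noise has standard deviation 5, the mean's expected error shrinks to roughly 5/sqrt(30), about 0.9, which is the statistical reason scaling the network should sharpen its predictions.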

Joeri:

Yeah, of course, I can imagine that if you're in beta, you can still improve. But what I also can imagine is that people, with what we already know about AI today, start getting a bit scared about all that power, about what is possible, and then there's the story that you tell, with these predictions and so on. What do you say to those people who are a bit scared of everything that's happening?

Jordan:

I'm not scared of AI at all. What I am a little bit scared of, though, is the power that AI brings to individuals and companies, and so I think it's really important to try to decentralize any of the AI that we can. And that happens naturally to an extent, because there's always competition, there's not just one language model, so we're always going to have some competition and decentralization. But the more we can get, the better, and so when I saw that future prediction is perfect for decentralization, I wanted to do that one right away, because I think the earlier we can start decentralizing these things and distributing them, the better. We want their power, what they allow us to accomplish, to be decentralized, but we also want the production to be decentralized, and this is so that everybody has a voice. If you download Satori, technically you don't have to use the AI engine. There's no way to enforce it, in other words, so you can just make predictions any way you want. And predictions are a very powerful thing, because if they are at all accurate, they start to become self-fulfilling prophecies. Not all the time, but they can. They have that potential, so they have the potential to give the predictor the ability to change the future, and that's a really big power that we don't want a centralized entity to have, whether it's a government or a corporation or an individual. We want to decentralize this as fast as possible, and I think that's really the solution to all these AI concerns.

Jordan:

Another thing is the way we've built AI. Right now, AI is just language to people. That's what it is, right? The very basic idea of AI today is: computers can talk now. That's what AI is.

Jordan:

A few years ago, AI was like cool GANs and image-processing kind of stuff. It would tell you what's in an image or it could create one for you, and that's what AI was until language took it over. We found large language models and we were like, oh, this is important. So we train the large language models with humans deciding what language, what content, it should ingest. We curate a dataset for the AI to train on. Then we curate another dataset for it to have as its error metric, and we can't help but put in our own language and biases and everything we don't really want: mind viruses, our propaganda, our economic incentives. We can't help but manipulate it toward the ends of whoever is creating it, right? Maybe they try to avoid doing that a little bit, but they can't help it. They're going to do it to an extent.

Jordan:

And so now, with future prediction, you don't have to curate the data. In fact, it's best if you don't curate the data. That's not something you should do. You should just have the raw data flow into the system and it learns how to predict the raw data, and so you use the actual future as its error metric. You don't use our own groupthink, or what we think is right, or here's how you say nice things. We don't use that as the error metric. We use the truth, what actually happens in the future, as the error metric, and I think that's important. That's a way to get AI to give us the actual truth. We can't get the actual truth from a language model, but we can build models that predict the future, and that can get us as close as possible to the actual truth.

Joeri:

I now hear you talk about this combination: people are worried about AI because it's centralized organizations that build it, but then there's the decentralization that we have in what we call Web3 or blockchain and so on. I always say that these technologies come together, and what you are building really makes sense in that light. So I am also wondering, how do you envision a world where Satori's predictions are just a part of our daily decision-making, both in our individual lives and maybe on a societal level?

Jordan:

I came up with kind of a roadmap for what I think the development of Satori will be long-term, but in reality I don't know exactly, so I can give you my vision of what I think is going to occur, and it's this. The first phase of Satori is development and scale. We build it and then we start to scale as much as we can, and during this phase we're going to try to be as focused as we can on the big picture, and that just makes the most sense. If you want to be able to predict everything in the world, you have to start with the broadest data streams. Instead of predicting one individual's sentiment on their Twitter stream or something, Satori Network will probably predict stuff like the CPI, stuff that affects everything, and we're going to do that and build a good foundation of understanding the world, and that includes our civilization and our environment: government statistics, the market, our culture, the environment, everything. So, a broad base. After we get that broad base, we can then correlate what we've learned about our world with individual data streams from companies, from individuals, from governments, things that they might want predicted but don't want a public prediction of. So in phase one, everything is public; everything the miners predict is absolutely public. In phase two, we add another layer on top of that bottom layer. We say, okay, now if you send us your data stream, your quarterly sales, whatever it is, and you want to know the future of that data, we will de-silo your data from the rest of the world. We will correlate it with everything that we understand, and in doing so we can provide the best prediction of the future of your data, and we'll provide that just to you. So in phase one, we build something called the Satori Public Good, public predictions of our world.
In phase two, we build a private prediction marketplace: private predictions for any companies or individuals that want to correlate their data with the world's data.

Jordan:

That's how I see the future going. I don't know exactly what that looks like individually. It seems like people will want to know the future of their own data, because as soon as they get that, they can decide, I'm going to change my behavior because I don't like this future. That's what's going to happen on an individual basis, whether that individual is a person or a government or a company, and I think that'll increase our efficiency. I also think it serves as a good analogy for what our civilization will be able to do as a whole. With that public prediction, we can say, oh, we see a disaster coming before it comes. We don't have to get blindsided by a new housing crisis or something like that, right? We see it coming as it's developing.

Jordan:

We see it coming and we can say, oh, we need to put this in place, change this law, do it, and we'll be able to see what we need to do in order to avoid disaster on a civilizational scale. And that's really what I'm probably the most excited about: once we have something like a future oracle, something that predicts the future quite well, generally speaking, then we can stop bickering and fighting and believing different things. We can say, oh, this is the most likely future if nothing changes. So why don't we change? And we'll get a different future that we all like a little bit more, and I think it'll allow us to adapt more quickly to what the future holds.

Joeri:

Yeah, it will also make it easier to make decisions if you're in a situation where people are discussing, and because they are discussing, they are not making any decision. And because it's decentralized, you can also trust it more. The bigger the network, as you said, the more it scales, the better the predictions will be. You mentioned that people can also download Satori already today. What is, I would say, the action plan, the step-by-step plan? If they want to use Satori, how can they do that?

Jordan:

Sure, they just go to satorinet.io, the website, and click download. It runs inside Docker, so they have to download Docker and install it. That's all they have to do. Then they download the Satori installer, which runs it inside of Docker, isolated from the rest of their machine. Doing it that way, it will just run inside of Docker 24-7 once they turn it on. We wanted it to be a one-click thing, where you just install it once and you're done, and that's the goal.

Jordan:

A lot of these distributed AI systems have the same problem. Some of them don't, but a lot of them basically make the miner the person that downloads the software. They need continual labor from that person because they use that person to build the models. They say we're a platform where data modelers can come together and build models and compete and do that manually, and I think they tend to have this problem because they don't narrow their focus in on some domain and just do that one thing. If they did, then they could do what we're doing. We've narrowed in on temporal prediction. We can build an AI engine that generates models automatically and we don't need data modelers to come in. They're always welcome and they can help improve the engine. That'd be awesome, but we don't need an army of data modelers to run the Satori network, unlike some of these decentralized systems. We just need people to download it so that it can scale that way.

Jordan:

So, yeah, just download it and run it. It runs in a browser; you can see the UI, and that's it. Now, it's in beta right now, so it will actually earn test tokens, but they're just test tokens. They're worthless, just for testing, making sure all the systems are working. Hopefully, when we launch, you won't have to re-download it. That's the goal. With beta, you never know, but hopefully it'll be a seamless transition, and if you've downloaded it now, that'll be great.

Joeri:

That sounds easy. And if people still have questions, where can they go? Can they contact you or someone on your team?

Jordan:

Yeah, so our community is mostly on Discord, and we have Discord links on the website. You can hit join and see all the links that we have on that page: Discord, Twitter and Reddit, I think. But yeah, we're available mostly on Discord.

Joeri:

Great, as my listeners know, there are always show notes for every episode. All the links that were mentioned will be in there, the Discord, the website and so on. So, Jordan, thank you. The time went really fast. I'm really excited to learn more and to try it out myself. So, thank you so much. Thank you, I appreciate it. Guys, like I said, this is an episode that makes your head spin. I also watched the videos on Jordan's website, which give even a bit more information, and there is also his Discord. If you think this episode is also useful for people around you, be sure to share it with them. If you're not yet subscribed to our podcast, this is a really good moment to do so, and, of course, I would like to see you back next time. Take care, thanks.

What's the overview of Satori Network and its ambitious goal of predicting the future?
How did your background in philosophy and information systems influence the inception of Satori?
How did your studies on how the brain generates intelligence shape the development of Satori?
What is the accuracy of predictions when considering the size and intelligence of the network? Does it vary based on these factors? Can you provide examples to illustrate this?
What reassurance or guidance would you offer to individuals who are apprehensive about the current developments?
How do you envision a world where Satori's predictions become integrated into our daily decision-making processes, both at an individual level and on a societal scale?
What is the step-by-step plan for individuals who want to use Satori? How can they download and start using it today?