Recap: Auki Labs CEO Nils Pihl's Live AMA Session in the Aukiverse Discord Community

November 20, 2023

We were thrilled again by the enthusiasm the posemesh team continues to encounter during our live AMAs.

Our latest AMA was hosted in the Aukiverse Discord Community.

Here’s a tweet promoting the event.

📅 Mark Your Calendars! 📅 #AMA session with Nils Pihl (@broodsugar), Founder & CEO of Auki Labs!

Hosted on the Aukiverse Discord with Kevin.

🗓️ Date: Friday, November 17th
⏰ Time: 10 AM UTC | 6 PM HKT
🔗 Join us on Discord: https://t.co/3UvJnA8hb3 #posemesh #auki

— The Posemesh Foundation (@Posemesh) November 15, 2023



See more and follow The Posemesh Foundation on x.com/posemesh.

Announcing the AMA in the Aukiverse Discord Community.

For those of you who missed the live Q&A, we’ve got the recap for you right here.

Ready to get up to speed?

Let’s dive in.

The Posemesh Foundation AMA with the Aukiverse Discord Community

We began the AMA session by discussing with Nils the recent pilot initiatives Auki Labs has rolled out in retail stores.

Introduction

Auki Labs is a pioneering force in spatial computing, revolutionizing how we interact with technology and each other.

Get ready to explore the world of spatial computing, discover innovative solutions for retail, and dive into the future of augmented reality.

Today, we're diving into the immersive world of spatial computing, where the digital and physical realms converge to reshape our experience.

Q1: Could you share more about how The Norwegian Rune Poem influenced Auki's mission and vision for spatial computing?

Nils Pihl: Okay, so for those who don't know, we got the name Auki Labs, Auki, from a poem and a song, in a sense.

There's a band that I really enjoy called Heilung that put out a song called Norupo (see video below), which is an abbreviation for The Norwegian Rune Poem.

Music video for Heilung's song Norupo, the origin of the Auki Labs name.

The Norwegian Rune Poem is a very, very interesting piece of literature and poetry. It's a poem that is so old that it has survived in at least three different languages. There's an Icelandic version, there's an Anglo-Saxon version, and there's a Norwegian version.

What this poem is, is essentially a kind of A is for apple, B is for banana kind of poem, but for the runic alphabet.  The runic alphabet, the futhark, was a mix of being both phonetic, like the English alphabet, and pictographic, like Chinese characters.

Related Post: AR and the Power of Poetry

So the way this poem is structured is it starts with a letter and its pictographic meaning, and some words of wisdom about that.

For example, the very first line in The Norwegian Rune Poem, Fé vældr frænda róge, which translates into, “wealth creates discord amongst kinsmen.”

Wealth creates discord among kinsmen, meaning money will make friends fight.

And I think that's a very interesting first thing to say in this poem to help you remember the alphabet. The first thing you should remember is that wealth will create discord among kinsmen. If you don't remember anything else, remember that.

Finally, later on in this poem, when we come to the character that represents humanity, Maðr, or in the older version of the alphabet, Mannaz, which is the logo of our company, the Mannaz rune (ᛗ), the line is, Maðr er moldar auki, which means humanity is an augmentation of dust.

What does that mean?  It means essentially that we are more than just matter. And I, as a memetic transhumanist, absolutely believe this very much.

One of the things that I believe that is very core to our vision and mission with Auki is the idea that our culture, our language, our behavior, and our memes are a more important part of our identity than our genes.

The thing that makes humans interesting and separate from animals are our memes. It's our language. It's our culture.

And for those of you who have been following us for a while, you'll know that I believe that shared augmented reality - the ability to visually manifest your knowledge and imagination in the field of view of other people, in the imaginations of other people - really represents the next big step in human language.  

So when I came across this poem, I thought that Auki would be a very appropriate name for what we're doing because it's a reminder that humanity is more than dust. We are more than matter. But it's also a reminder of the importance of the language of communication and also a little bit of a head nod to my Nordic ancestry, which is nice to remember every now and then even though I've left it far behind and I'm now a Hong Konger.  I wanted to have a little bit of Nordic spice in all of the Asian sci-fi that we're building.

Q2: How do you envision spatial computing transforming our daily interactions with technology and each other, particularly in terms of bridging the digital and physical worlds?

Nils Pihl: Maybe it's worth pointing out first, like what is spatial computing?

Spatial computing is essentially the art of teaching a digital device to understand its physical surroundings, like where it is in space or how far it is from walls.  

You could argue that computer vision is a subset of spatial computing.  In my opinion, that's neither right nor wrong.  But certain kinds of computer vision, certain kinds of triangulation, certain kinds of just helping digital devices, helping machines understand the physical world around them, that's spatial computing.

Spatial computing is necessary for both virtual reality and augmented reality.

It's spatial computing that allows virtual reality headsets to work: when you tilt your head, the game world tilts, too, because the digital device understands its movement in the physical world.

And when Tim Cook announced the Vision Pro, Apple's new mixed reality headset, he said something that I think is worth paying attention to (see the video embedded in the tweet below).

He said that just as the Mac introduced us to personal computing and the iPhone introduced us to mobile computing, Apple Vision Pro will introduce us to spatial computing...

When @Apple announced their Vision Pro headset, @tim_cook made a very important point about the future of computing - the transition from mobile to spatial. 🧵 pic.twitter.com/WmhBPusxV1

— Nils Pihl (@broodsugar) August 26, 2023

I'm going to tell you where I agree with this and where I disagree.

I agree that there's something very big happening, the transition from mobile to spatial.  Mobile to spatial is going to be as big, if not bigger, probably bigger than the transition from PC to mobile.

What I disagree with is that the transition is from handheld to face-worn.  That's going to happen too.  And it's going to happen at a similar time frame, but it's not the same transition.

The really interesting thing that's happening is not the change in form factor but the change of where information is and how we interact with it.

From PCs to mobile, it changed how we interact with information. It wasn't just at our desk; it was also in our hands. However, with spatial computing, information starts appearing in its correct physical context.

You can think of the transition to spatial computing as indexing the internet by physical location instead of by topic.  And when the internet starts manifesting itself in physical space, this helps communication in a lot of ways.

Imagine a simple note like, “Water this plant every Thursday.” If that note can actually appear on the right plant, that's a lot more information than just a note that says water this plant.

So what would the alternative be?  If you can't put the information in space, then you would have to describe which plant you mean.  Water the plant in the window every Thursday.

In a way, putting information in its right physical context is like a compression algorithm.  When you put information in its correct physical context, you need fewer words to describe what you mean.

Augmented reality and spatial computing are a way to allow for more rich and precise communication using fewer words.

Following an arrow from point A to point B is a lot easier than following a verbose instruction of turn left, then walk 100 meters, and then turn right until you pass this tree, blah, blah, blah.

So by allowing for new semiotics or new symbols for visual representation of information in a shared way, spatial computing is very likely to transform our communication every bit as much as being able to have chats with each other.

One of the things that I believe, and I really do believe this, and I hope in time to convince all of you of this, is that the transition to spatial computing is probably on the scale of the transition, not just from PC to mobile, but really on the scale of the transition from oral culture to written culture, from written culture to the printing press, from the printing press to the telephone, from the telephone to chat rooms, etcetera.

Spatial computing, I've convinced myself, really represents the next really big leap in human history for our ability to communicate with each other.

And I would be very surprised if 100, 200 years from now, even 1,000 years from now, history books didn't talk about the transition to spatial as what happened after the internet.

You know, we had the computer revolution, then we had the internet revolution, then we had the social and mobile revolutions, and then finally, the spatial revolution.  And in a way, I think the spatial revolution represents the pinnacle of the internet arc.  It's the next-to-final destination.

After shared AR, the only thing that could be better would be a direct neural interface.  And when we're talking direct neural interfaces, maybe it's not even meaningful to talk about the internet or the metaverse or anything like that anymore.

So I think, in a very real sense, the transition to spatial computing that we're all lucky enough to live through is genuinely one of the most interesting things in human history - it's very likely to represent one of the largest economic upheavals and transformations in human history.

What is the market size of the Internet?  It's not an easy question to answer because the internet touches everything.  And within 20 years, spatial computing is going to touch everything, just like mobile touches everything.

If you said, oh, the mobile market is just how many phones are sold, then you're missing out that it's also all the apps that are being sold and all this SaaS that is being subscribed to and all of the services that are being provided on mobile phones.

The mobile industry is already so big that it's not meaningfully calculable, and spatial computing is going to be that big too.

And that's why we're seeing every major tech company, Apple, Google, Microsoft, Snapchat, Facebook, all of them, all of them are investing heavily in spatial computing.

And since we have, I can see by the profile pictures here that we have a lot of people who are very familiar with the crypto world.

You know, over the last two years, there was a lot of conversation about hashtag metaverse, right?  But one of the things that I find very interesting is that the metaverse conversation was really two conversations with a bit of an overlap.

One was Silicon Valley talking about the metaverse, which was very much focused on spatial computing, VR, and AR. The other was the decentralization movement, the crypto movement, talking about the metaverse mainly in terms of digital ownership and interoperability.

And, of course, there was a section in the middle that said that the metaverse is both of these things. But it's also important to remember that for a lot of people, the metaverse was just one of these things.  Like, I don't know how important Mark Zuckerberg thinks it is with interoperable NFT digital ownership, for example. I don't know. But I know that he thinks spatial computing is very important.

I know that Meta has this business unit called the Reality Labs. I know that they bought Oculus. I know that they're spending many, many, many billions of dollars on spatial computing.

And we suspected that Apple was, and now we know because they released the Vision Pro. Not only did they release the Apple Vision Pro, but they told us that from this point on, Apple should be understood as a spatial computing company.

And Microsoft, of course, released the HoloLens.

Snapchat, a social media company in many of our imaginations, has come out and said that, actually, they're an augmented reality camera company. So even Snapchat is going from a social media company to a spatial computing company.

Tesla, I would argue, is a spatial computing company.

You may have heard people say that Tesla is not a car company; it's an AI company.  But even that, I think, is not correct because the thing that makes Tesla competitive as an AI company is the fact that they have millions of cameras driving around the world doing spatial computing and scene reconstruction and rebuilding the world to allow for Tesla's cars and in the future Tesla's robots to navigate the real world.

The competitive advantage that Tesla has is not necessarily its AI but the resources available to its AI, which are being made available by Tesla's spatial computing efforts.

And when we talk about Tesla's decision to include or not include a LiDAR in the car, this is a spatial computing decision.

The mainstream media has not quite caught on to how big the spatial computing arms race is; it only started talking about spatial computing after the Apple Vision Pro came out.

In the spatial computing industry and Silicon Valley, spatial computing has been a very big arms race for a very long time.

So to summarize the question, how do I envision spatial computing transforming our daily interactions?

I imagine it transforming the way we communicate with each other and machines at the same level of magnitude as the introduction of the internet. We could sit here all night and talk about everything that's going to change.

And you can see from how the world's largest tech companies are spending their money that this is the consensus: Spatial computing is the future of the internet.

Q3: There are a lot of possibilities - it is so early. You’ve already worked a long time on Auki. What does the future look like for the company?

Nils Pihl:  It's very fun working at Auki because we have been given the opportunity to build towards not only the future we believe in but specifically in the version of the future that we think is preferable to civilization.

Related Post: The Future of AUKI: In-Depth Discussions with Auki Labs CEO Nils Pihl and the Launch of the Posemesh

One of the things that we talk about a lot, and for those of you who have seen our seven-minute introduction to the posemesh video (embedded below), one of the things that scare us about spatial computing is how much of it is based on computer vision and how much of it is based on sending your camera feed to centralized providers.

Seven-minute posemesh introduction video referenced in the AMA.

You know, how much of making AR glasses work is based on allowing the manufacturer of the AR glasses to look through your camera.

Q4: Could you elaborate on how Auki changes retail operations and the specific benefits it offers to staff, efficiency, and knowledge transfers?

Nils Pihl: Retail has been struggling with a couple of cost centers for a long time.  The two most expensive things with doing retail are your staff and your rent.  Those are the most expensive parts of doing retail.  And retail has been struggling a little bit with the competition from e-commerce.

Over the last two decades or so, e-commerce has been growing two to three times faster than retail has been growing. So retailers are facing many challenges.  And one of the challenges that we really, really wanted to hone in on is this issue of staff turnover and staff training.

To get a sense of the scale of the problem, a lot of retailers have annual staff turnover between 40 and 70 percent.

Walmart, for example, reported having an annual staff turnover of 44 percent, which is relatively low in retail but very high globally.  What a 44 percent staff turnover means is that Walmart actually has over 900,000 employee first days every year.

Related Post: Spatial Retail by the Numbers

And if you factor in the wages Walmart pays for those first days, we're talking over 100 million dollars.  Over 100 million dollars are spent every year by just the largest retailer in the world on first-day salaries.  And a lot of your first day is dedicated to training: learning what things need to be done, where they need to get done, and how they need to get done.  And the where part is one of the hardest things to teach people.
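As a rough sanity check on those figures, the headcount is Walmart's approximate global number, and the eight-hour first shift is an assumption added here for illustration; the turnover and wage figures are the ones quoted in this AMA:

```python
# Back-of-envelope check of the Walmart figures quoted above.
# Headcount is approximate; the 8-hour first shift is an assumption.
employees = 2_100_000        # Walmart's approximate global headcount
turnover = 0.44              # 44% annual staff turnover, as cited
hourly_wage = 14.00          # USD/hour, the figure quoted in this AMA
first_day_hours = 8          # assumed length of a first shift

first_days_per_year = employees * turnover
first_day_wages = first_days_per_year * first_day_hours * hourly_wage

print(f"First days per year: {first_days_per_year:,.0f}")
print(f"First-day wages per year: ${first_day_wages:,.0f}")
```

Under those assumptions the numbers land at roughly 924,000 first days and just over $103 million a year, consistent with the "over 900,000" and "over 100 million dollars" figures above.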

Recently, I've been interning at grocery stores to learn for myself what the day-to-day is like now.  Of course, I worked in grocery stores when I was a teenager.  But now, 20 years later, how has technology changed the workflow?

And the honest answer is that technology has not impacted retail as much as it probably should have.  In some of the places where we pilot, task management is handled on WhatsApp.

In some places where we pilot, task management is handled literally with pen and paper.  

And then there are things like click and collect, where you order your groceries online and go pick them up, already bagged for you.  Someone in the store picked that order for you.  And picking an order takes time.  And if you're new, it takes even longer.

What if even a first-day employee, even a first-hour employee, could know where every item in the store is?

So when you get a list of 40 items to add to a shopping cart, not only do you know where all those items are, but we can help you do route optimizations.  You'll walk the fastest path to pick up all of these items.  What kind of impact would that have?

Well, it turns out to have a huge impact.  Without that kind of help, a large e-commerce order can take something like 15 minutes to pick.  And if you're paying $14 an hour, as Walmart is, 15 minutes is a quarter of that hour.  So it's a quarter of $14.  That's a few bucks.  And the margin in grocery retail is typically only between 1% and 3%.

So the difference between taking 15 minutes to pick up an order and 5 minutes to pick up an order has a very big impact on how much you get to keep of every shopping bag you sell.  
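Working through the figures quoted above makes the point concrete. The 2% margin below is an assumption picked from the middle of the 1-3% range Nils cites:

```python
# Illustrative picking-cost math from the figures quoted above.
hourly_wage = 14.00          # USD/hour, the Walmart figure cited
margin = 0.02                # grocery margins cited as 1-3%; assume 2%

def picking_cost(minutes):
    """Labor cost of picking one click-and-collect order."""
    return hourly_wage * minutes / 60

slow, fast = picking_cost(15), picking_cost(5)
print(f"15-minute pick costs ${slow:.2f}, 5-minute pick costs ${fast:.2f}")

# Basket size whose entire margin is eaten by the extra picking time:
break_even_basket = (slow - fast) / margin
print(f"The ${slow - fast:.2f} saved equals the full margin "
      f"on a ${break_even_basket:.2f} basket at {margin:.0%} margin")
```

At these assumed numbers, cutting a pick from 15 to 5 minutes saves about $2.33 in labor, which is the entire 2% margin on a basket of well over $100 of groceries.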

But also, of course, the less time you have to spend training. Someone is usually involved in training.  Person A is training Person B. And while Person A is training Person B, they're not performing other tasks.

So we realized that by allowing staff members to use spatial computing, to communicate with each other across distances, across time, to leave tasks and reminders and notes for each other in space, by allowing them to know where all the products are, by allowing them to see things like the planogram of the store as an AR overlay (see video embed below for example), that we can reduce the time that they spend on each task. We can reduce the training time required for new staff.

A quick intro to better understand the tech Auki Labs is deploying in the retail space.

But we can also, we believe, and we're trying to prove this now, actually reduce the level of stress and cognitive complexity for the staff themselves so that they won't burn out as easily working in retail.

Everyone who has worked in retail for a long time will tell you it's easy to burn out working in retail.  And if we can use spatial computing technology to help not just the employer but also the employee have a better experience, we're very excited about that.

So we are piloting, first and foremost, tools to help staff do better.

We have a couple of pilots happening where we're also helping shoppers have a better experience.  But right now, our focus is really on how do we help the staff because that's going to help the employer save a lot of money, and it's going to prevent the staff from getting a lot of gray hairs. So that's what we're trying to do with Convergent.

Q5: Okay, you say focus first on the retailer and employees.  What do you think about the technology for the shopper, the consumer at the retail store - how do you see that?

Nils Pihl: The main issue with the shopper experience is, of course, that each retailer wants to control the brand experience of that. So it's not viable for us to come out with an Auki branded app for shoppers. That's not okay. Of course, someone like Walmart is going to want a Walmart app.

Now, we've made an SDK to make that possible. So we can support someone making their own branded thing.  But building on the SDK takes longer than using an app we already made.

So for staff, we can provide a ready-made solution. But for shoppers, no one's going to want to buy a ready-made solution because everyone wants to have their own branding on it.  

And there are, of course, people that have their own apps, and they can integrate our technology into their apps, but it takes time.

A lot of people are also trying to get rid of their own apps and do web apps instead of native applications.  And web spatial computing is still very early.  We don't have good support for web AR this year.  So shopping experiences that have to run on the web are something we are not ready for yet.

Running inside someone else's app, we are ready for that, but it takes longer.

So to make sure that there's demand for our service and get traction early, we think it's strategically better to focus on the staff, where we can get a deal faster and there's less integration. And yeah, we can move the deal forward more quickly.

Q6: Developers often seek versatile tools.  How does the ConjureKit SDK empower them to create unique spatial computing applications?  What are the primary advantages it provides over other developer kits?

Nils Pihl: There are many great augmented reality SDKs out there. What's unique about the ConjureKit is our focus on creating shared AR experiences.

Shared AR experiences require multiple devices to agree on a coordinate system together. The majority of AR solutions out there either don't have that capability or don't have it as their focus. But this is our focus. And to make that possible, there are two main things that the posemesh provides.

The first thing that we provided was a real-time networking service that you can use for multiplayer, a decentralized real-time networking service that you can use for collaborative spatial computing and multiplayer.

We combined that with patent-pending calibration methods that would allow two devices to get into a shared coordinate system by only scanning each other's displays.  This was the thing that we invented in 2021, which made us the first company that could create a shared AR experience on the fly, ad hoc, in just a second. Whereas at the time, it took around a minute to create a shared experience in something like Pokemon.

Related Post: Introducing the ConjureKit

The second thing that we're bringing to the table is decentralized mapping services as well.  

The ability to put QR codes on your floor and host your own navmesh, and to do that in a way where that data is not stored on our servers.  That data can be stored on your own servers if you want; of course, you can store it on ours, but you don't have to.

If you want to do your own visual positioning connected to the posemesh, you can.

The ConjureKit basically brings you networking, calibration techniques, and persistent AR through domains.

On top of that, to make it a bit sweeter to use, we added a fourth thing.  

We made our own proprietary monocular 3D hand tracker so that you can do hand interactions the same way you can on headsets that have advanced depth reconstruction or phones that have LiDAR, but even on phones that don't have a LiDAR.

See the monocular 3D hand tracker in action in this video.

To be clear, only iPhone Pro phones have LiDAR today.  There's one Android phone with LiDAR out there, kind of.  LiDAR on phones is very rare.  Without LiDAR or stereoscopic reconstruction on headsets, you can't actually have hand interactions.

If you look at Niantic's latest AR pet, Peridot, it is very cute.  If you want to touch the AR pet, you touch it on the screen.  But when you play with our prototype AR pet, the Incos, you touch it in space because we have hand reconstruction.  We provide that as part of the SDK.

People who build on the ConjureKit SDK can create shared experiences and persistent experiences, and they can make use of our patent pending calibration techniques, and they can make use also of our monocular hand reconstruction.

Q7: The posemesh protocol is central to your innovations. Could you explain how it works and its significance in enabling collaborative and privacy-preserving spatial computing?

Nils Pihl:  A good way to understand the posemesh is, first, to understand how other people solve positioning.

So if we look at, for example, how Google solves greater than GPS positioning, they have two services, one called Spatial Anchors and one called the Spatial API.

Both of these are essentially the same kind of solution but for different circumstances.

Spatial Anchors are a way for you to create a 3D copy or 3D representation of what a small space looks like and store that on their cloud so that when you make applications, you can reference that copy and compare your camera feed to that copy to find out where you are in relation to that copy.

So this is a way to do positioning.  It's called visual positioning.  Niantic has its own visual positioning.  Snap has its own visual positioning.  Apple has its own visual positioning.  Basically, all of the major companies have their own visual positioning.

But the way that visual positioning works is essentially, you send your camera feed or data derived from your camera feed to these centralized providers.

And then math happens in their cloud, and they respond to you with where you are, which means that not only did they get to look through your camera, but they also got to do the calculation of where you are. So they know where you are and they know what you are looking at.

The posemesh tries to turn this upside down and make it privacy-preserving.

So with the posemesh, even if you want to do visual positioning in a posemesh way, what happens instead is that we provide data to the device so that the device gets to calculate its own position.

So we use the QR codes as an example because that's the method we use now when a device tries to connect to a domain.

Let's say that you have a domain that's hosted on your hardware.  It's not on our cloud. And an application, a posemesh application, shows up, and you've given permission to this posemesh application...

The posemesh application will ask, “Hey, can you tell me about all the QR codes you have and what their positions are relative to each other?”  

And then you choose if you want to answer.  And then, you know, by default, you'll say like, “Yeah, sure, I'll tell you.”

Now that I know that, I can do some calculations on my own end and figure out where I am.  But the only thing you would know is that I came and I asked for information about your QR codes.

You actually don't know which QR code I'm looking at.  

You don't necessarily know where I am within the space or how I'm moving within the space.  And neither does Auki Labs.  We also don't know.
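The flow Nils describes can be sketched in code. This is purely an illustrative sketch: the class names, the `get_qr_map` call, and the data structures are all hypothetical stand-ins, not the actual posemesh API. The point it demonstrates is that the domain server hands out the map of QR code poses, and the device computes its own position locally:

```python
# Hypothetical sketch of the privacy-preserving positioning flow
# described above. All names and structures are illustrative only.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float  # position of a marker (or device) in domain space

class DomainServer:
    """Runs on the domain owner's hardware, not on Auki's cloud."""
    def __init__(self, qr_poses: dict[str, Pose]):
        self._qr_poses = qr_poses

    def get_qr_map(self, app_id: str) -> dict[str, Pose]:
        # The server only learns that an authorized app asked for the
        # map; it never learns which code the device is looking at.
        return dict(self._qr_poses)

class Device:
    def locate(self, server: DomainServer, seen_code: str,
               camera_offset: Pose) -> Pose:
        qr_map = server.get_qr_map(app_id="demo-app")
        anchor = qr_map[seen_code]
        # The position is computed locally on the device: the anchor's
        # pose plus the offset the camera measured to the scanned code.
        return Pose(anchor.x + camera_offset.x,
                    anchor.y + camera_offset.y,
                    anchor.z + camera_offset.z)

server = DomainServer({"qr-7": Pose(2.0, 0.0, 5.0)})
me = Device().locate(server, "qr-7", Pose(0.5, 1.5, -1.0))
print(me)  # the device, not the server, now knows where it is
```

Contrast this with the centralized visual positioning flow described earlier, where the camera feed (or data derived from it) is sent to a cloud that computes and therefore knows the device's position.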

The point of the posemesh is to build a positioning service that's a part of the Internet rather than part of some company.

The goal of the posemesh is to grow away from Auki Labs and become an independent, open-source, public utility that's part of the Internet.

Q8: How do the decentralized resources for hyperlocal routing (Hagall nodes) work? What advantages does Hagall offer in creating an immersive XR experience?

Nils Pihl: The Hagall network is the name of the networking part of the posemesh.

So the posemesh has a decentralized real-time networking server that you can use if you want to do collaborative spatial computing and multiplayer.

The point of the Hagall network is that collaborative spatial computing and multiplayer AR require lower latency and ping than any other mainstream application, with even higher demands than online gaming.

And the reason why is that augmented reality is always rendering against reality, and reality moves at the speed of reality.

So what does that mean?

Let's say, for example, that you have a virtual object, and we have 30 milliseconds of ping, which is great for online gaming.  You know, if I'm playing a game of PUBG and I have 30 milliseconds of ping, I'm very happy. Thirty pings, good.

But 30 milliseconds of ping in a shared AR experience would mean that even though I see this virtual object when I lift the object, my hand moves first, and the object follows. So to prevent this from happening, you need to have really, really low latency.

Ideally, you need to have sub-eight millisecond or even sub-four millisecond latency so that you can calculate the position of things within the same frame if you're running at 60 or 120 Hertz.
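The arithmetic behind those targets is just the frame budget. The "half the frame for the network" split below is an assumption added here for illustration, not a quoted spec, but it shows why sub-8 ms and sub-4 ms line up with 60 and 120 Hz:

```python
# Frame-budget arithmetic behind the latency targets above.
# The 50/50 split between networking and local work is an assumption.
for hz in (60, 120):
    frame_ms = 1000 / hz
    # Budget roughly half the frame for the network round trip,
    # leaving the rest for tracking and rendering in the same frame.
    network_budget_ms = frame_ms / 2
    print(f"{hz} Hz: frame = {frame_ms:.1f} ms, "
          f"network budget ~ {network_budget_ms:.1f} ms")
```

At 60 Hz a frame lasts about 16.7 ms, so half the frame is roughly 8 ms; at 120 Hz the frame is about 8.3 ms, leaving roughly 4 ms, matching the sub-8 and sub-4 millisecond figures above.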

And that's actually not that easy to accomplish on the internet today.

The reason is that talking directly peer to peer is only really practical if there are two of you; as soon as there are three of you, it's not that practical anymore.  And not all devices are allowed to talk to each other peer to peer.

So if you are talking to each other over the internet, the ping you get is dependent on how far apart you are, not on a map, but how far apart you are on the internet.

Our theory with the Hagall network is: look, we think the future of AR multiplayer and collaborative spatial computing is not massively multiplayer.  People want small private sessions, you know, two to eight participants doing something together.

And a real-time server that's only handling four people actually doesn't need to be a much faster computer than the computer I had in my room in the nineties.

So we built the Hagall network to be a really, really lightweight networking server.  It is so lightweight that it can run even on a router: we have some people that run it on routers.

The idea is there are a lot of small computers in the world, inside of our routers, inside of our smart TVs, inside of IoT devices, that are just sitting around, not working at full capacity.

What if we allowed these devices to get an extra job on the side, hosting these little real-time networking sessions and earning rewards in exchange?  Could we lower the latency by allowing you to connect to a hyper-local server, one out of potentially hundreds of thousands or even millions of servers, instead of choosing one of 50 AWS servers?

We haven't proven this, but it's a good hypothesis. And now, thanks to the community, we have over 300 Hagall nodes out in the world, meaning the Hagall network actually has better coverage than AWS.

The Hagall network has better coverage than AWS.

The next goal for the Hagall network is to reach a milestone we call cloud saturation, which means that the Hagall network is running literally on the three major clouds: Azure, Google Cloud, and AWS in every region.

On AWS, we're already doing that: we're running in every availability region. We're getting close on Azure, and we're getting close on Google Cloud. And when Hagall is in every major cloud availability region, it stands to reason that that's the best ping the Web2 world can give you.

So from that milestone, we want to start comparing how much we can lower the ping from there once we start introducing more devices like smart TVs, routers, and spare computers that are not part of cloud server farms.

So hopefully, next year, we can really test and prove the hypothesis that the Hagall network can reduce latency for multiplayer and collaborative spatial computing.

We have tutorials on our YouTube (see the video embedded below) for how to set up a Hagall node if you want to experiment with it. We particularly want to encourage people not to run it on the cloud.

If you have some spare compute, something like a Raspberry Pi or a fancy router with a computer in it, run it on that.  We're very, very excited for people to do that.  Don't spin one up on AWS Stockholm.  We have enough Hagall nodes on AWS Stockholm.

For the best results, for the network's sake, we advise running it 24/7. And it's very easy to set up...

Contribute your spare compute to a privacy-preserving protocol: set up your own Hagall node.

Q9: What challenges do you anticipate in the widespread adoption of spatial computing, and how do you see Auki Labs addressing these challenges in the near future?

Nils Pihl: So, of course, for people to adopt spatial computing, they need a use case that makes sense.

There is a sense from many in the XR industry and mainstream commentary that once AR glasses are small enough, everyone will start using them.  I don't think that's the problem.

The problem today is that people don't know why to use AR glasses, or ask, “What am I going to do with AR?” And I think that's because, in a way, Pokemon Go ruined our imagination of what AR is for.

When you start thinking about AR as a way of communicating, seeing it as a language, you start understanding that the killer use case for AR is having shared information in space.

And if Auki were to enter the AR glasses race - which we hope to do one day if we have the resources to do it - our focus is not on creating cool 3D experiences that eat a lot of battery and make the glasses very big.

One of the ways to get the glasses smaller is to focus on a more purely linguistic use case and just put text labels in space.

But I also believe that for enterprises where there are costly problems out there, the headsets are small enough already.  They don't need to get smaller.

So the hypothesis we came out with, when we raised money in 2021, was this. We went out and said, “We think the XR industry is wrong. Everyone is saying that it's about making smaller AR glasses. We don't think that's the issue. We don't think it's a battery problem. We don't think it's a processor problem. We don't think it's a size problem...”

We think that the problem is that you can't have shared experiences. And if you can't have shared experiences, there are no good use cases.

So, the challenge for us is to prove that that's right.

We are on our way to proving that by putting out use cases that make people adopt even handheld AR, not even waiting for the glasses but using augmented reality already in a handheld format.

And, of course, there will be a challenge that we, as a smaller company with fewer resources, have to compete with the likes of Apple, Tesla, and Google.  That's a challenge.

The opportunity is also that if you win this race, you can be every bit as big as a Tesla, Google, or Apple.  That's what we want to do.

One of the things that we really, really believe is that spatial computing represents a transition as big as the transition from the telephone to the Internet.

The next trillion-dollar company is likely to be a spatial computing company. We think it's likely to be built here in Hong Kong, and we're trying to make sure that it's going to be us. That's the goal.

The goal is not to sell Auki Labs to Apple.  The goal is to get big enough that we can dream about buying Apple.  That's the goal. I unironically believe that the spatial computing opportunity is literally bigger than the entire crypto industry.  

One way to demonstrate that: not everyone knows this, and since we have a crypto audience here today, this may shock you, but if you took all of the engineers in crypto, all of them, you would still have fewer engineers than work at Facebook.

There are more engineers working at Facebook than in all of crypto combined.

If you take all of the spatial computing engineers at Apple, Microsoft, Facebook, etcetera, there are already more engineers working in spatial computing than there are in all of crypto combined.

So, as much as we're learning from the crypto community and enjoying learning about how decentralization works and can be made to work and what the challenges are - ultimately, we believe that spatial computing is a bigger industry than the entire crypto industry. We're quite adamant on that.

Q10:  How can developers and enthusiasts engage with Auki Labs to contribute to the evolution of spatial computing and its applications?

Nils Pihl: One of the things that we are the most excited about, of course, is if you build something on the SDK.

We have plenty of open-source code that you can use to get started. We have a DevRel team in our Discord, and they can help you build applications.

We also have GPTs that can provide code examples and help guide you through building.  

Building things that showcase your imagination, and how you want to use shared augmented reality, is one of the absolute best things you can do to help this project along.

Big thanks to Berenice, who has done this.

The way we got in touch with Tracy in a very real sense is she built some stuff on our SDK; we thought it was so awesome that we hired her.

Related Post: Tracy Liu - Auki Labs Innovative Force in Tech and Developer Relations

So if you dream of having a job at Auki Labs, build something on the SDK and send it to us.

Of course, as I mentioned earlier, next year, we're going to push toward cloud saturation on the Hagall network as well.  And when we do that, there will be a coordinated effort to make sure that we cover Google Cloud, that we cover Azure, that we cover AWS.

Also, when we put out videos and blog posts and stuff, it's very helpful if you share them.  Share them not only inside the crypto community but also with retailers. Share it with your mom.  Share it with real people in the real world because spatial computing is going to touch everyone.

Q11: As we traverse the uncharted territories of spatial computing, envisioning the fusion of digital domains with our physical world, could this technological leap be a portal to new realms of spiritual exploration and understanding?

Nils Pihl: Language exists to help different minds perceive and understand the world the same way.

The reason we keep saying “manifest the imagination and knowledge in the minds of others” is that this is what language is for. This is what augmented reality is for.

And I believe that human beings have a deep need and desire, that the universe has a deep need and desire, to understand itself and to understand each other.

The spiritual mission of Auki Labs is not to solve positioning. It's to allow for shared understanding.

By improving the language stack, by improving the technology that allows us to communicate, we can raise the ceiling for how good our shared understanding can become, how good our information symmetry can become, how good our intersubjectivity can become, how good our inter-cognitive reasoning can become.

And this assemblage of minds, allowing people to come together and form a whole that is greater than the sum of its parts, is, in a very real sense, I think, the apotheosis of the human psyche.

We are coming together to build civilizations, inter-cognitive civilizations, and to be part of something bigger than ourselves.

There was a Jesuit priest in the early 1900s, actually, who was reflecting on language and technology and how they were making it easier for us to communicate with each other.

And he predicted a future.

I forget the name he used for it, but he predicted a future where, essentially, humanity almost gets mind-reading capability with each other because we're so good at communicating with each other that we are of one mind.

Isn't that kind of like building God from scratch? Is language humanity's journey to manifest God from scratch? And how do you build that?

A good clue is probably to start by building out the language stack.

What Auki wants to do is build out the language stack, our capability to communicate, and our capability to understand each other so that humanity can become something greater than mere individuals.

We can collaborate, we can be a civilization, we can have intersubjectivity, we can have inter-cognitive reasoning.

Q12: Where do big tech and blockchain potentially either collide or conflict when it comes to spatial computing?

Nils Pihl: We have chosen blockchain as a part of the stack for the posemesh to help handle rewards and reputations within the protocol.

So blockchain is a component of the posemesh. But I want to be very clear:

The posemesh is bigger than a blockchain, and the posemesh is not a blockchain. It just uses a blockchain for a part of it. The reason we're using a blockchain for a part of it is that we are hoping that spatial computing will not belong to big tech.

We are hoping that the posemesh will be able to grow into something independent that is not owned by Auki Labs but owned by the people, and it's just a part of the Internet in general.  We're hoping to build a public utility.

Now, in theory, there are probably many different ways that you can do that.

In practice, we know that the big tech companies, with their equity shareholder structure and fiscal responsibilities to maximize returns, etc., don't have an interest in positioning as a public utility in that sense.

Of course, Google is hoping to be the positioning back end of the world.  Of course, they're hoping to be able to look through your glasses, etc.

So one of the things that doesn't come automatically with blockchain, but that blockchain can assist in, is the struggle between decentralization and centralization.

There are a lot of centralized things in blockchain.  In many ways, the blockchain culture is more centralized than Web2.  Most Web3 projects are built on Alchemy and Infura.  These are two providers that are running a huge percentage of all Web3 projects.

There's a lot of centralization in the blockchain movement too.  But blockchain at least provides a path towards decentralization.  And we are thankful that blockchain exists to open this path to us.  But I wouldn't characterize it as blockchain versus big tech.  It's decentralization versus big tech.

Whether or not blockchain is going to be a part of the decentralization versus centralization debate depends a lot on what the blockchain community does with it.

Over the last two years, blockchain has become very centralized and is losing a little bit of its cultural legitimacy in the decentralization versus centralization debate.

Of course, there are still great decentralization thinkers out there, people like Balaji, etc., that push this.  But to some extent, the voices of people like Balaji are being drowned out by people like BitBoy.  BitBoy is not about decentralization. It's about numbers going up.

So we'll see what role the blockchain community and the blockchain culture will play versus big tech.

But the posemesh and the posemesh community will definitely be a counterforce to big tech and centralization.

And we hope that big parts of the blockchain community and the crypto community will join us in this fight for decentralization.

Q13: So what technology can AR tech transform into in the next decade, taking into account the expansion of AI and the possible future connection of our brains with a computer, assuming that our brains will be able to process so much information in real time?

Nils Pihl: Fun fact, I actually had the opportunity to work on a neural interface, like connecting software to a brain, a little bit more than 10 years ago.

I had a small role in a project focused on proving that primate brains are plastic enough that they can learn how to use new limbs.

Lili Cheng, Corporate Vice President at Microsoft, recently said essentially that spatial computing is the eyes and ears of AI.

What does that mean, that it's the eyes and ears of AI?

Well, today, ChatGPT lives on the internet. ChatGPT doesn't know anything about the physical world that we didn't tell the internet.

If you want the kind of future that Elon wants with the Optimus robots, et cetera, you need spatial computing to allow robots and AI to inhabit and understand the world that we are in.

Optimus humanoid robot.

And AR, of course, we've already talked a lot about how augmented reality is the future of language.  But it's also, as part of spatial computing, the bridge from the digital world into the physical world.

Like, if I can be a little bit cyberdelic for a while, over the last few decades, humans have visited digital worlds, and that's been great. We've gone into World of Warcraft, we've gone into Decentraland, a thousand people went into Decentraland, whatever. We've gone into virtual worlds, and we've experienced virtual content.  

But what spatial computing allows now is for digital things to come into our world.  This is turning the whole thing upside down.  Instead of humans going into the computers, the computers are coming out to the humans.  And that's a big moment in capital H, History.

With my experience, both in spatial computing and in big data, and the work I've done with neural interfaces, I know that the problems of interoperable collaborative spatial computing overlap a great deal with interoperable collaborative neural interfaces.

I imagine Auki - if Auki survives - to be a formidable competitor for companies like Neuralink 20 years from now.

Because the software stack that allows for collaborative spatial computing, I have a strong hypothesis, is the software stack that will also allow for inter-brain communication with brain chips.

Wrapping Up

Nils Pihl: Thank you for coming here for this AMA. I appreciate sharing the space with you and having this intersubjective moment.  I'll see you guys around the Discord.

Kevin | Moderator: Nils, thank you for this very interesting AMA.

About Auki Labs

Auki Labs is at the forefront of spatial computing, pioneering the convergence of the digital and physical to give people and their devices a shared understanding of space for seamless collaboration.

With a focus on user-centric design and privacy, Auki Labs empowers industries and individuals to embrace the transformative potential of spatial computing, enhancing productivity, engagement, and human connection.

Auki Labs is building the posemesh, a decentralized spatial computing protocol for AR, the metaverse, and smart cities.

AukiLabs.com | Twitter | Discord | LinkedIn | Medium | YouTube

About The Posemesh Foundation

The posemesh is an open-source protocol that powers a decentralized, blockchain-based spatial computing network.

The posemesh is designed for a future where spatial computing is both collaborative and privacy-preserving. It limits the surveillance capabilities of any organization and encourages sovereign ownership of private space maps.

The decentralization also offers a competitive advantage, especially in shared AR sessions where low latency is crucial. The posemesh is the next step in the decentralization movement, responding to the growing power of big tech.

The Posemesh Foundation has tasked Auki Labs with developing the software infrastructure of the posemesh.

Twitter | Discord | Medium | Updates | YouTube  | Telegram | Posemesh.org


Contact Us

Auki Labs AG, c/o WADSACK Zug AG
Bahnhofstrasse 7
6300 Zug
Switzerland

contact@aukilabs.com
