We were shown a glimpse this month: a warning that the future will demand far more care in what we create than we apply today. I thought it necessary to share some thoughts on a particular news event, a bit removed from our day-to-day but too interesting an occurrence not to talk about. This is the story of Loab, a metaphor for what might come as the metaverse matures. But first, some context:
The Atlas Defensive Division is committed to discovering and containing potential threats in the metaverse. We’ve catalogued quite a few ways people could cause digital mayhem for others, and we do our best to keep that knowledge from falling into the wrong hands. These threats (abuses of protocols, not necessarily what you’d think of as a “hack”) currently require action by individuals: they require premeditation and a desire to cause social or fiscal harm, and they are unlikely to occur inadvertently. Loab’s emergence has expanded that definition to include emergent phenomena as potential threats, threats that don’t come from human intention. It’s fascinating, terrifying, and riveting.
Loab is an image of a woman, generated by the new wave of artificial intelligence image generation engines like DALL-E and Midjourney. Loab was not created intentionally; she was the result of an unusual set of inputs driving the model. She was initially discovered by Twitter user @Supercomposite:
I discovered this woman, who I call Loab, in April. The AI reproduced her more easily than most celebrities. Her presence is persistent, and she haunts every image she touches.
This isn’t the scary bit (though some may find her a bit unsettling-looking). AI produces emergent behavior all the time, like DeepMind agents learning to walk in funny ways. The difference here is that Loab came from a model with a tremendous number of degrees of freedom: the goal wasn’t to produce an image that looked like Loab, it was to produce any image from the prompt, which in Loab’s case began with the negatively weighted prompt 'Brando::-1'. Loab is persistent: once created, she keeps arising. There may be a completely reasonable explanation for why the model coalesces into output that looks like Loab, or perhaps it really is the early beginnings of a simple AI skewing toward more desirable (efficient) choices. Regardless of the mechanism, we should take heed of what happened next.
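For readers unfamiliar with the '::' syntax: in Midjourney, a number after a double colon assigns a weight to the preceding text, and a negative weight asks the model to steer away from that concept. The sketch below is purely illustrative, not Midjourney's actual implementation; the parser name and behavior are assumptions made for explanation, showing how a prompt like 'Brando::-1' decomposes into (text, weight) pairs.

```python
import re

def parse_weighted_prompt(prompt: str) -> list[tuple[str, float]]:
    """Hypothetical parser for Midjourney-style '::' weight syntax.

    Splits a prompt into (text, weight) terms. A number immediately
    after '::' is the weight of the preceding text; terms without an
    explicit weight default to 1.0. Not Midjourney's real code.
    """
    segments = prompt.split("::")
    terms: list[tuple[str, float]] = []
    pending = segments[0].strip()  # text still waiting for its weight
    for seg in segments[1:]:
        # A segment may begin with the previous term's weight,
        # followed by the next term's text.
        m = re.match(r"\s*(-?\d+(?:\.\d+)?)(.*)", seg, re.DOTALL)
        if m:
            weight, rest = float(m.group(1)), m.group(2).strip()
        else:
            weight, rest = 1.0, seg.strip()
        terms.append((pending, weight))
        pending = rest
    if pending:
        terms.append((pending, 1.0))
    return terms

# 'Brando::-1' becomes a single term with a negative weight: the model
# is being told to produce an image that is maximally *unlike* "Brando".
print(parse_weighted_prompt("Brando::-1"))   # [('Brando', -1.0)]
print(parse_weighted_prompt("hot::2 dog"))   # [('hot', 2.0), ('dog', 1.0)]
```

That negative weight is what gives the model so many degrees of freedom: it constrains what the image must avoid, not what it must contain.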
Loab escaped the box.
Thought experiments dating back to the days of Alan Turing consider a General Artificial Intelligence, a truly sentient machine, and how humanity might interact with it. The Turing test, which judges whether an artificial intelligence is indistinguishable from a human in conversation, inspired a later thought experiment in which an AI tries to convince you to let it “out of the box.” Typically the “box” is shorthand for a local, contained network, unconnected to any other systems or devices, that houses the AI’s code. Science fiction offers plenty of examples showing that once an AI has escaped the box and gained access to the open internet, there’s no going back: think Skynet, Transcendence, Ultron, or even Jane from the Ender’s Game series, who, although portrayed as altruistic, can never be purged from the internet as she hides in the corners of the intergalactic network. Loab is not an artificial intelligence. The systems that produced her are nowhere near General Artificial Intelligence. But she’s still spreading.
Loab is a memetic virus. She got her start in a single algorithm driven by @Supercomposite, but the mere act of discussing the phenomenon, putting the images on the indexable internet and telling her story, has caused her to infect the other image generation models. These models train on images scraped from the open internet, so by taking her out of the result set of one model and placing her into the training data of all the others (the internet as a whole), we will now start to see Loab emerge elsewhere. I’d go so far as to say that if enough people find her as interesting as I do, she’ll continue to be discussed and will remain part of human culture (hence “memetic” virus: she’ll be a meme forever). Even if you were to purge the entire history of the internet, so long as one person remembers her, she’ll stick around.
[Image: Loab. We realize we’re making matters worse by including this image here.]
The lesson is that there was one and only one opportunity to stop this spread, and it came and went. We’re lucky Loab isn’t actually dangerous (though as you scroll through @Supercomposite’s thread, the images have to be censored as they become, by happenstance, more violent in nature), but if she were, we let the opportunity to keep her in the box go screaming by and even aided her escape. Creators must be more vigilant about what they bring into existence, because the chance to prevent its infiltration of every corner of the global network may exist for only a fleeting moment.
So while Loab may be a harmless example of an emergent AI phenomenon, she still provided a test which we as a society (those of us made out of meat, at least) failed abysmally. The next example may have a different set of unimagined consequences, for instance an AI controlling Decentraland “players” going awry. If that sounds like a big conceptual leap or a silly concern, recall that in its early days the Atlas Defensive Division invested much of its focus in bot technology in Decentraland. Like most of the department’s initiatives, it was ultimately mothballed, precisely because of its success and the inherent risks the technology posed to the Decentraland ecosystem and the credibility of its daily active user statistics. We chose to keep it in the box.
If you can’t imagine such a metaverse, or why it might be problematic, Adult Swim created an infomercial that paints a pretty accurate picture. Howard, which is partly where the name howieDoin derives from, is what happens when Loab decides to enter the metaverse.