
“Generative artificial intelligence is going to change what it means to be human”

Generative artificial intelligence is set to disrupt more than just how we work: notions such as authorship and creativity may have to be rethought at the dawn of AI-generated content, argues GenAI expert Nina Schick.

Photo: Nina Schick, generative AI expert and consultant. Credit: Courtesy of Nina Schick.

By Elena Astorga

Since OpenAI kicked off the generative AI craze with the launch of ChatGPT, more and more technology companies have joined the race. OpenAI has since released a more advanced conversational AI system, GPT-4, and its DALL-E 2 image generation tool is making its mark in the film industry. Google got on the track early with Bard, although it only arrived in Europe last week. Ernie Bot, from Chinese giant Baidu, stumbled badly at launch. The latest to enter the competition is Meta with LLaMA 2, although there are signs that Apple will not be far behind. Aside from the major players, startups such as Stability AI and Midjourney are also gaining ground.

While big tech competes for a juicy market share, other public and private sectors regard generative AI more warily. Since it first gained notoriety through deepfakes, it has been at the forefront of a range of concerns spanning misinformation to sexual violence. Recently, GenAI was at least partly to blame for the breakdown of negotiations between actors' guild SAG-AFTRA and the Alliance of Motion Picture and Television Producers (AMPTP), after the Black Mirror-esque proposal by Hollywood studios to create AI doubles of actors, whose likenesses they would own and control "for eternity".

The capabilities of this technology are so vast that it seems no industry will be left unimpacted, no information flow unscathed, no supposedly exclusive human capability unchallenged. For GenAI expert, author and founder of advisory firm Tamang Ventures Nina Schick, the question goes beyond practical applications and into existentialism: "It's a technology that's going to change, I think, what it means to be human," she muses.

Generative artificial intelligence (specifically, AI image generators) was one of MIT Technology Review's 10 Breakthrough Technologies for 2023. What makes this technology so relevant, and what advance does it represent over other AI applications?

Traditional AI is more about labeling or categorizing data, but generative AI can create new data. In the early days, people saw it as a new way to generate visual media, but over the past few years it's become abundantly clear that it's much more than that. It can be seen as almost a combustion engine, a superpower, for the creation of everything that we had assumed was unique to human creativity and intelligence.
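To make that distinction concrete, here is a minimal, purely illustrative sketch (not something Schick describes): a discriminative model labels data that already exists, while a generative model fits a distribution and samples new data from it. The threshold classifier and the 1-D Gaussian are hypothetical stand-ins for real models.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Traditional" discriminative AI: assign labels to existing data.
# Toy stand-in: classify a value as above or below a threshold.
def classify(x, threshold=0.0):
    return "positive" if x > threshold else "negative"

# Generative AI: learn the data distribution, then create NEW samples.
# Toy stand-in: fit a 1-D Gaussian to observed data and draw from it.
data = rng.normal(loc=2.0, scale=0.5, size=1000)  # observed examples
mu, sigma = data.mean(), data.std()               # "training"
new_samples = rng.normal(mu, sigma, size=5)       # generation

print(classify(1.3))   # labels what already exists
print(new_samples)     # data points that never existed before
```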

It's a breakthrough technology in 2023 because now it's mature enough to be applied in enterprise for efficiency, creation, insights... Traditionally, we've been brought up to think that human intelligence and creativity are something that technology can't automate or augment, kind of like a bastion for humans only. But what we're starting to see is that AI can definitely be used as an enhancing tool in all kinds of aspects, from creating AI-generated personalized visual content for entertainment to assisting in industries that traditionally have required a lot of skill and years of training, like law, accountancy or coding.

In my view, this is probably one of the most profound technological revolutions in the history of mankind, and I think it's going to unfold very quickly. One reason for that is that big tech companies have understood that this is a key technology, and over the past six months since ChatGPT came out they've pivoted to make GenAI a core part of their strategy. When a company like Microsoft integrates OpenAI's generative tools into its suite of software, used by hundreds of millions of people on the most popular operating system in the world, you see an acceleration in the adoption of the technology, because it's being embedded into the existing digital and physical infrastructure of the Internet.

Recently, at the Digital Enterprise Show 2023 in Malaga, you stated that in a few years employing generative AI to create digital content will be commonplace. In this scenario, how will we need to rethink the concepts of authorship and authenticity? What tools could be put in place to verify them?

I have said that 90% of online content will be generated by AI by 2025. This figure reflects my view that this is the last era of the Internet in which most of the content and information we see online doesn't have some layer of AI creation to it. We've already started to see this over the past few years with things like Instagram or TikTok filters, but now, with generative AI capabilities reaching maturity and being deployed into almost every enterprise use case across industries, my view is that the vast majority of information and content we see online is going to have some synthetic element to it.

So determining the authenticity and origin of content is probably an existential question when it comes to information integrity. Initially, with the appearance of deepfakes, which I call the first viral form of generative AI, the focus was on detecting content created by AI. However, that's quite a problematic approach, because there's no one-size-fits-all detector, and AI generators will always evolve to beat them. Perhaps more importantly, if you agree with my assertion that the majority of content and information will have some degree of AI involved in its creation, detection is not going to be enough. Another approach, which I think is more promising, is the idea of transparency and content provenance. Rather than detecting, it's about revealing: embedding into the DNA of any piece of information or content a record of where it came from. And it's more than a watermark: one of the companies that I advise uses PKI [public key infrastructure], which is kind of a cryptographic hash, so you can always check whether it was made by AI or who owns it.

But people will actually need to see those content credentials, so we must build the infrastructure to do it into the architecture of the Internet. It's now being developed as an open standard by a nonprofit body known as the C2PA [Coalition for Content Provenance and Authenticity], of which hugely influential companies like Microsoft and Adobe are part. And you also need to educate society about AI and digital literacy, so you have both technical and societal solutions, and everything needs to work in conjunction.
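Schick doesn't go into the mechanics, but the hash-and-sign provenance idea she alludes to can be sketched roughly as follows. This is a minimal illustration of the general PKI approach, not the C2PA standard: the Ed25519 key pair, the SHA-256 digest and the use of Python's `cryptography` package are assumptions chosen for the example.

```python
# Minimal sketch of hash-and-sign content provenance (NOT the C2PA spec).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The creator (a camera, an AI tool, a studio) holds a private key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"...image or video bytes..."

# 1. Hash the content: any later change to the bytes changes the digest.
digest = hashlib.sha256(content).digest()

# 2. Sign the digest; the signature travels with the file as a credential.
signature = private_key.sign(digest)

# 3. Anyone with the public key can verify origin and integrity.
public_key.verify(signature, digest)  # passes silently if valid

tampered_digest = hashlib.sha256(content + b"!").digest()
try:
    public_key.verify(signature, tampered_digest)
except InvalidSignature:
    print("Credential check failed: content no longer matches its signature.")
```

In this scheme the signature binds the content to its creator's key, which is why Schick describes it as stronger than a watermark: altering the bytes, or forging an origin, breaks the verification step.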

At that same event, you pointed out that generative AI was able to take off in 2022 by drawing on the entire Internet for its training. However, this has given rise to criticism and concerns regarding the copyright of the authors of the original content fed into the AI. Where do these rights stand in the era of ChatGPT and DALL-E? Will we have to change or expand our notion of what art and creativity are to factor in GenAI?

I think we will, because you have to conceive of it as an entirely new medium. It's just like when photography was invented and landscape painters worried that people would not buy their paintings anymore, now that anyone could "just click a button" and create a picture of a landscape. But that doesn't mean that everybody who uses GenAI can be creative or artistic; it's just a new tool. I think we have to move away from the conception of AI as an autonomous agent stealing from human creativity, because the reality is that a lot of creative people are already using GenAI as a way to enhance their creative genius.

As for artists legitimately feeling like their work has been stolen, we're already seeing the first class-action lawsuits against some of the generative AI companies, claiming that they took artists' work without consent and put it in the training data, and that therefore everything DALL-E or Stable Diffusion has made is a copyright infringement. I actually don't think they're going to stand, because that's not the way diffusion models work: you can't trace back which specific images were used to create a piece of AI content. So there's a bigger point here: what is the new model of compensation for artists and creators whose work, whose inspiration, is used in AI training datasets?
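Her point about traceability can be illustrated with a deliberately tiny, hypothetical sketch (my construction, not the actual architecture of DALL-E or Stable Diffusion): training data only reaches a diffusion-style model through gradient updates to a fixed-size set of weights, so individual training images are not stored as such in the finished model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "diffusion" step: learn a linear denoiser x_hat = W @ x_noisy.
# Real diffusion models are huge neural nets, but the mechanism is
# analogous: data only ever touches the model via gradient updates.
dim, n_images = 8, 10_000
images = rng.normal(size=(n_images, dim))          # stand-in training set
W = np.zeros((dim, dim))                           # the model's parameters
lr = 0.01

for x in images:
    x_noisy = x + rng.normal(scale=0.3, size=dim)  # add noise
    x_hat = W @ x_noisy                            # predict the clean image
    grad = np.outer(x_hat - x, x_noisy)            # squared-error gradient
    W -= lr * grad                                 # update the weights

# After training, the only artifact is W: 8 x 8 = 64 numbers, whether we
# trained on ten thousand images or ten billion. No individual training
# image sits inside the weights waiting to be traced back.
print(W.shape, "parameters summarize", n_images, "images")
```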

Years ago, in your opinion piece "Don't underestimate the cheapfake" in MIT Technology Review, you pointed out that manipulation and misinformation did not need the technical perfection of generative AI and deepfakes to wreak havoc on the political landscape. What is at risk if we become unable to tell apart what's true and what's fake? What measures can be taken to combat visual disinformation?

It's a serious, profound, kind of philosophical concern. My background is in geopolitics, and the consistent thread in my career has been how technology is emerging as a force shaping macro geopolitics and influencing the lives of billions of individual citizens. Over the past decade, I'd already seen the corrosion of the information ecosystem online, even before AI-generated content really came into play. And it was having some heinous consequences: for instance, disinformation that spread on Facebook in Myanmar was part of the reason why we saw the ethnic cleansing campaign against the Rohingya.

Mis- and disinformation are an age-old phenomenon, but because of technology, how fast and far information can travel, and the impact it can have, has profoundly changed in the last 30 years. And now we inject into that trend the ability for people to make any content with artificial intelligence and to scale it. How profound could the impact of AI's ability to clone people's biometrics be on politics, or in scamming people? Because as the technology gets better, less and less training data is needed. There are already companies out there who say they can synthesize voices with three seconds of audio, which means that anybody's voice can be cloned.

But it's not only that: now that we know AI can synthesize and create anything, it becomes easier to deny that anything is real. That corrosion of the integrity of information itself is the really dangerous point.

Can we avoid the misuse of this technology without forgoing its potential benefits?

I've looked at it from both sides: I initially came at it from the risk perspective, given my background in information warfare and disinformation. And then, over the years, I kind of pivoted to working with some of the startups who are building the technology and actually understanding that misinformation is only a part of the story. People will weaponize and use this technology in bad ways, but there's so much possibility for knowledge generation for humanity as well. What will generative AI unlock in terms of discoveries that could change the trajectory of humanity? How could it be applied in science and in medicine? We're already starting to see that it can uncover new proteins to power drug discovery or help fight climate change, for instance, by developing enzymes to eat up plastics in the oceans.

There's no putting the genie back in the bottle. So we need to take a whole-of-society approach, because no single institution, no single state or single civil society can deal with the scale and pace of the change that's ahead. It's going to be an adjustment for all of society, and governments must work really closely with industry, which is where the technology is being developed, because it needs to be regulated, transparent, and developed with responsible and ethical AI at its core. There's a lot of work to do, because it's just such a huge question: how do you mitigate the risks while seizing upon the opportunities? But it's been encouraging to see that this was identified as one of the core issues from the very onset of the revolution, and that this collaboration between governments and private industry is happening.