Last year I wrote a piece titled The Free Internet, where I discussed the importance of internet platforms that are truly open, decentralized, and controlled by the communities and people that use them. It was written with a focus on social media and online communication, but lately there has been another facet of our digital lives seeing much discussion: artificial intelligence.
Much has been written on AI already, by people far more knowledgeable than myself, and most of the focus has been on a few key questions. How can we adapt job markets to an increasing degree of automation? How do we handle consent and attribution of content used for machine learning? What are the consequences of large language models such as ChatGPT being broadly used in academia, or being used to influence political opinion at a mass scale?
Rather than reiterate these particular questions, however, I instead want to take a step back and look at the role of AI as a paradigm shift. I had initially started this article by writing a bit about what AI actually is, but came to realize that this is the wrong way to go about it.
It’s not really about AI
I could write a few paragraphs detailing the history of AI, the classification of narrow versus general intelligence, the philosophical consequences of challenging how we think about sentience, how large language models such as GPT-3 work, and so on. And don’t get me wrong – there are many important discussions to be had around the specifics of AI, how we interact with AI, and its role in society.1
However, in only doing this we’re ignoring a more general problem, one that is not unique to AI but rather applies to any kind of paradigm shift, one we have seen again and again: embracing new platforms without giving thought as to who operates them, for what purpose, and in which manner.
If this sounds familiar, it’s because it is the same argument I have brought forth in many of my previous articles – and it is even more important here. To understand why, let’s look back at the world a couple of decades ago, right at the advent of social media.
Boiling the frog
Needless to say, the internet was far less omnipresent back then. The way we communicated with friends, followed the news, did our work, and ran our daily lives was far more spread out and diversified – not always in good ways, but certainly in ways different from today.
At this point, companies such as Google, Facebook, Twitter, and Amazon all started out as idealistic efforts to change the world for the better. Whether this was the actual intent can be debated, but nonetheless the public at large got the idea that these companies were run by idealists wanting to improve everybody’s lives through technology.
As these platforms became more and more useful, many of us – me included! – were quick to integrate them into our lives without much thought. It was easy to see the advantages: connect with long-lost friends, read news better tailored to your interests, be more productive through better email and work-related services, buy all your things online, and so on. Meanwhile the most important question remained unspoken:
What happens when we don’t want to use these services anymore?
Without giving the matter much thought, we became so reliant on these non-free platforms that it’s now exceedingly difficult to live our daily lives without them. If you do not wish to use Google services, you are now often unable to do your work. If you do not wish to use Facebook services (including Instagram and WhatsApp), you are cut off from people close to you. If you do not wish to use Discord, you are unable to communicate with most gaming and tech communities on the web. If you do not wish to use Amazon services, it is becoming increasingly difficult to buy things that you would previously have found locally, or watch the same movies and shows as your peers – and if you had previously purchased any digital books, you can say goodbye to those if you ever close your Amazon account.
The illusion of choice
I want to stress a point here – there is not necessarily anything wrong with these companies existing, or with them having practices that you may disagree with. Amazon is fully within its rights to revoke Kindle books or form exclusivity agreements for movies and TV shows. Online communities are fully within their rights to flock to Discord for communication. Google is entirely within its rights to read through your email for targeted advertising. Sure, the transparency of these practices could be better, but it’s not what these companies do that is the real problem.2
The real problem is that as a society we have ended up treating these huge private corporations as public services, and to such a degree that is making it increasingly difficult to live without them.
Smartphones are a good example – both Google and Apple are companies that have faced a significant amount of criticism, and it’s entirely reasonable to not want to support either. Even if you take no issue with the companies themselves, both Android and iOS are operating systems that are strongly locked down and by their very design antithetical to user freedom.
However, it’s difficult today to say “I don’t want to support Google or Apple, so I won’t use an Android or iOS phone”. Some alternatives do exist, including ones that are truly free, but in practice you have to sacrifice a huge amount of functionality that people and organizations around you expect you to have. And it’s that last part that’s important – being fine with sacrificing functionality in return for not supporting companies you disagree with is perfectly reasonable, but when society – even governments! – expects you to use these non-free platforms, this sacrifice becomes something that few people can reasonably do.
It’s a fair assumption that had we better understood this at the time, we might have gravitated towards less centralized and more community-driven platforms as we made the internet a greater part of our lives. Understandably, we did not have this foresight, and the greatest challenge in getting people to change today is that they are already deeply entrenched in the services and platforms they currently use.
Looking towards the future, though, we have a second chance. We may not be able to predict exactly where the next paradigm shift will lead us, but we can learn from our mistakes.
The danger of ubiquity
With all this in mind, when discussing AI from a broader standpoint, all we need is a single assumption:
AI will end up becoming indispensable to us.
That’s it. If we agree that this statement is at least likely enough to warrant consideration (or is already true!), then the largest and most relevant question becomes far more visible than it would if we dove into the details of AI specifically. The question is – who creates, operates, and provides the AI services?
Let’s actually delve into this for a moment!
The largest3 actor in consumer AI-centric services today is undoubtedly OpenAI, which is behind AI models such as ChatGPT and DALL-E. OpenAI set out in the same manner as most other big tech corporations did – with a clear and explicit goal of changing the world for the better.
Admirable as this sounds – and specific, too: their website spares no effort detailing their humanitarian approach to AI – it might be tempting to feel comforted by this, and by the fact that the organization describes itself as a research lab. Unfortunately, while OpenAI should be commended for emphasizing these matters, this does not paint the entire picture.
In reality, much of this is less relevant than it seems. Don’t get me wrong, I do believe that most people working at OpenAI (and other places like it) genuinely want to do good – but the entire starting premise, just like for the equally idealistic social media platforms a decade ago, is inherently incompatible with a truly open platform.
To start, OpenAI is not actually a non-profit enterprise. In 2019 they transitioned to a for-profit model, and the bulk of their funding comes from venture capital. This does not inherently mean they cannot do good, but it bears mentioning since it is easy to miss when reading their website.
More importantly still, despite taking a stand for openness (it’s in their name!), they are still a singular entity operating in a private manner – and this is where the actual issue lies.
When is open not open?
The Free Software Foundation famously makes a point to use the term free software over open source. Their argument goes that when they talk of “free software”, they describe an ideological movement – the idea that software should respect our freedoms. In contrast, when people talk of “open source”, it is purely a functional description (anyone can view the source code) and intentionally avoids making any sort of ideological point.
This is a meaningful distinction to make. There are many corporate projects today that are labeled as “open source” – usually enthusiastically by the corporations behind them – and from a literal standpoint, sure, anyone can look at their source code. In practice, however, the actual tools, processes and resources to develop these platforms are heavily locked down. Android and Chrome are good examples of this – it is very difficult to do meaningful development of an Android or Chromium derivative without using Google-developed tools and systems.
Likewise, GitHub is often hailed as a bastion of open source, yet the platform itself is completely proprietary, and users have no control over how it operates.
When we then look at AI models, this situation is even more accentuated. In fact, let’s look at a few of the current largest consumer-oriented AI services from this point of view:
- ChatGPT/GPT-3, as well as DALL-E, are models developed, trained, and provided entirely by OpenAI. Although OpenAI claims to be “developer friendly” and provide these platforms free of charge in an easy-to-obtain manner, it nonetheless all goes through OpenAI and is entirely under the control of this single organization.
- Microsoft’s Bing Chat and most other high-profile platforms run on GPT-3 or GPT-4 behind the scenes, and therefore fall under the exact same category as the above.
- Likewise, countless new “AI-powered” services are using GPT-3, making them entirely dependent on OpenAI.
- Google’s Bard is, needless to say, controlled and operated entirely by Google.
- Midjourney, a popular platform for creating AI art, is a proprietary product run by a single organization by the same name. Furthermore, for consumers Midjourney is only accessible through Discord, a monolithic platform with its own share of issues.
An obvious pattern emerges here, and not without good reason. Simply put, these AI technologies are inherently complex, require vast resources to develop, train, and operate, and are incredibly hard to build without the backing of a large corporation or vast sources of funding. Some attempts have been made to create truly open AI platforms, but significant technological obstacles remain.
Once again I want to stress that the problem isn’t that these corporations and services exist – but rather that we are far too quick to integrate these platforms into our lives in a way that makes them indispensable to us, before considering if they are truly platforms we want to support.
A truly open AI
In contrast to the examples above, what would a truly open AI model look like? At the very least, it would meet these criteria:
- The model would be completely open for anyone to contribute to, study, develop and copy. This includes both the model itself and the data used to train it.
- Anyone would be able to host and run services based on the model, provided they have the resources to do so. Charging money for this would be fine – the practical logistics can be hugely expensive – as long as the model itself remains open.
- Use of the model would be fully independent of any single provider – that is to say, you would never be locked into a specific organization or company, but could freely move between providers, since they would all essentially run the same thing. In other words, the AI equivalent of email or web hosting.
- Frameworks would be in place to ensure the intent behind the model follows a consensus of ethics, subject to constant re-evaluation.
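To make the provider-independence criterion above concrete, here is a minimal sketch in Python. It assumes providers expose the same API shape (modeled loosely on the now-common chat-completions convention), so switching hosts is just a different address – the names and URLs are hypothetical placeholders, not real services.

```python
def build_request(prompt, model="open-model"):
    """Construct a provider-agnostic request payload.

    Because the payload depends only on the shared API shape,
    it works unchanged against any compatible host.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


# Hypothetical hosts of the same open model -- a home server,
# a community-run instance, a commercial provider, and so on.
PROVIDERS = {
    "home-server": "http://localhost:8080/v1/chat/completions",
    "community-host": "https://ai.example.org/v1/chat/completions",
}


def endpoint_for(provider):
    """Moving between providers is just a different URL --
    like changing your email host without rewriting your mail client."""
    return PROVIDERS[provider]
```

The point is not the code itself but the property it illustrates: when the model and its interface are open, the provider becomes interchangeable infrastructure rather than a gatekeeper.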
None of these ideas are my own, nor are they distant theoretical concepts. There are many efforts around the world to develop truly open AI platforms.
However, these kinds of ideologically driven movements will always be slower and harder to bring to success than private business ventures. Corporations such as Microsoft and Google are now pushing AI services at a staggering pace, and it’s easy to start incorporating all of this into our lives since it’s already here, free of charge, and – most importantly – convenient. History rhymes, and this is the same reason we see Twitter used over Mastodon, Discord over Matrix, and Reddit over internet forums. It’s easier, yes, but it may lead us down a path where we eventually have no choice but to use services we cannot control.
To wrap all this up – it’s not about AI so much as the rising ubiquity of AI. There’s more to the conversation than the nature of AI itself: we need to consider who provides AI services as they integrate ever further into our society, and until we have built truly open and communally run platforms, it is wise to take a cautious approach. We should consider our actions carefully as we put even more of our lives into the hands of actors that may not have our best interests at heart.
This doesn’t mean staying away from AI services and platforms entirely – for most people it would be impossible to do so, and depending on your definition of AI, most services we use today already involve AI to some extent. But we do have choices in how we approach these services and how dependent we make ourselves on them, both individually and as a society.
It is worth working together towards implementations of these groundbreaking new technologies that are truly open, able to be run and improved by everyone, fully transparent in their design, and owned by no one. We saw the dangers of the last paradigm shift too late – but this time we have the opportunity to actually move slow, consider the big picture, and make sure AI technologies are in the hands of the people who use them.
1 If you are interested in reading more about this, I highly recommend The Big Nine by Amy Webb – a great overview of the history of AI and its role in shaping society in the coming generations.
2 Obviously, it remains a problem that these corporations engage in environmental destruction, human rights violations, and similar actions – but this is outside the scope of this particular argument.
3 There are many entities larger than OpenAI by many metrics. Here I refer more narrowly to the current zeitgeist and the technologies that are being rolled out more publicly at the moment.