
Opinion: Artificial intelligence is a Pandora’s box—we should kill it now

'Right out of the gate, you can consider me a hater'
Will humanity's curiosity over artificial intelligence doom it forever like so many ancient curses?

“Thou shalt not make a machine in the likeness of a human mind.”

That is a line from the currently very popular Dune sci-fi universe by Frank Herbert. Within that story, it’s a verse from a fictional religious text created in the millennia after humanity overthrew the complex computers oppressing it.

The gist of the verse is that no program may perform any task a human mind is capable of, be it solving mathematical equations, making logical assessments, wayfinding or decision-making. It is born of a fear of artificial intelligence taking away human agency.

Dune is, of course, science fiction, but artificial intelligence (AI) is the buzzword signifying where technology is leading us today, invoked everywhere as the latest disruptor, the newest opportunity, the greatest gadgetry meant to make our lives easier.

What AI is today is really just algorithms fed information and prompts, not robots capable of making decisions about governance or hunting down Sarah Connor (a Terminator reference, in case you missed it). It’s a growing sector made up of far more than just gimmicks peddled by tech bros online; it’s a huge area of study at the world’s top universities… and military contractors.

It has become a big enough item that the United Nations just last week passed its first resolution on regulating the sector, though in a customarily toothless UN way that “encourages countries to safeguard human rights, protect personal data, and monitor AI for risks,” according to Reuters.

The resolution was non-binding, and it came from the top two powers in AI research and development: the United States and China. They’re competitors across every sector, so their co-sponsoring of a file on AI is less a step in the right direction and more an indication that both are thinking about AI’s development potential, and about what the other is doing in the field.

As a thought experiment, AI and its implications have a high ceiling, so I’ll narrow in on where AI is most offensive to me, personally: LinkedIn.

No, not the website itself, but the confected messages I get from those statuesque thought leaders who reach out every so often to recommend I join them on such an illustrious journey of AI enrichment as… teaching AI programs how to write, and potentially putting myself out of a job?

Right out of the gate, you can consider me a hater. Automation of simple tasks is one thing; the data-scraping of human affectations and behaviours to be mashed into programs designed to mimic human expression is another.

The creative world is already reeling from the proliferation of AI programs that scrape the internet for intellectual property belonging to very real content creators, then pump out twisted and weird interpretations of art. In the legal world (of all places), there’s the incredible case of a B.C. lawyer caught red-handed submitting AI-generated text that included fake citations to cases that didn’t exist. Those are only a few examples of the technology gone wonky, applied in sectors it probably shouldn’t be applied in.

In the UN resolution, players in the AI space were encouraged to “monitor AI for risks,” but perhaps there should be something more, like the air-gapping of entire professions not just from the influence of artificial intelligence, but from its application entirely.

The defence technology sector is the big bad in this space (again, Terminator), but we don’t have to think so big, so sci-fi, or so dangerous to find egregious examples that threaten public confidence in institutions and information.

Consider the last few years of the online experience during COVID. How much information was out there that wasn’t true? All of it was generated by humans; imagine if those wanting to mislead could simply ask a program to come up with a suite of authoritatively written pieces designed to look like studies by reputable (but not real) universities on just about any issue you can think of. As it stands, the average reader already doesn’t check the veracity of a claim unless it’s completely out to lunch.

It’s true most of the “AI” we come up against is just gimmicks and toys, but as already mentioned, this is a serious area of study. You can bet there are a few bright sparks out there working on how they can merge AI programming with drone technology.

It’s a terrifying field, so no, I will not (willingly) be teaching any AI programs how to learn anything, and certainly not how to write (my critics will say I don’t know how to write anyway). The thin edge of the wedge passed a while back, and we’re well beyond the point of no return (GPS, Alexa: all that data is going into this), but I don’t intend to help the sector along on its journey to world domination.

AI as a field is currently well within our grasp. It doesn’t take much imagination to wonder what the world could look like if that changes, or if the tech is applied by those who do not seek our betterment. So before it gets out of our control, we should think along the lines of the creative mind of Frank Herbert (who wrote Dune in the ’60s) and stop the field in its tracks.