Now the Scary Part — The Significant Risk that Artificial Intelligence Poses to Museums

Part 2 of 3 in a series of articles excerpted from the keynote address given by Steve Keller on May 24, 2023, at the security conference held at the Taft Museum of Art in Cincinnati.

In this series, Mr. Keller speaks about three trends in museum security we should be aware of: Ransomware Attacks, Dealing with Environmental Protesters, and Artificial Intelligence, along with some practical actions you can take to protect your museum from these threats. Please note that these remarks were made before the recent publicity regarding artificial intelligence began to break in the news during the week of June 5, 2023.

Here is the link to the first article in the series.

 

Let’s move on to the third and final trend that I see in museum security today: the threat posed by artificial intelligence.

I don’t know if you have had time to play with ChatGPT or any of the other artificial intelligence apps that have become available on the App Store in recent weeks. I have, and I am impressed by their power. They will change our lives and how we do our jobs. But I am here to caution you about using artificial intelligence irresponsibly in your day-to-day business.

Artificial Intelligence poses a threat to museum security today

Let me first say that I am not some dinosaur who is afraid of technology. I was sixteen years into my security career and eight years into my museum security career in 1985 when I built my first computer. While I am not always the first adopter of new technology, preferring to let others work out the bugs before I turn that technology loose on my clients, I am always ahead of the pack in keeping up with it. I embrace it. I love it. My company has always been on the leading edge. But artificial intelligence is different, and I am here to warn you.

There are three reasons I caution you about artificial intelligence. The first is that I have been in museum security for 44 years and had a ten-year career in security before finding my first museum job. I am better today than I was back in the early days because I have done my job every day of my career, over and over, learning something new from each mistake I made. I can write because I write often. I can think because I carefully analyze every problem I face. Had I given every task to artificial intelligence to solve for me, I would not have developed as a practitioner of my trade, and neither will you. So please avoid overusing artificial intelligence.

Many of us can already relate to the dangers of over-reliance on technological tools. How many people still do arithmetic on paper or check a reference book before pulling up the calculator or search engine on their smart device? Having all the answers at our fingertips may seem helpful, but it has the undesirable side effect of removing the benefits we gain from thinking and solving problems for ourselves.

I asked ChatGPT to write a paper for me. The prompt I used was “Write for me, as though I was a beginner, a paper explaining the relationship between access control and the other basic elements of museum security.” What I got back within seconds was a fairly impressive paper based almost entirely on a paper I wrote in which I identified access control, parcel control, and internal security as the three basic elements of museum security. I have since added cyber security as a fourth element, recognizing that all of our primary electronic systems running on computer networks must be protected. The paper the AI wrote was at about the college freshman level and a bit “wordy,” sounding like a college freshman reaching for words on a topic he didn’t really understand, but then it was written in fifteen seconds, so who am I to criticize?
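For readers curious to repeat this experiment programmatically rather than through the chat interface, here is a minimal sketch using OpenAI’s Python SDK. Mr. Keller used the consumer app, so the model name and client setup below are illustrative assumptions, not a record of his method.

    # A minimal sketch, assuming the OpenAI Python SDK (v1.x) is installed
    # and an OPENAI_API_KEY environment variable is set.
    # The model name is an assumption for illustration.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write for me, as though I was a beginner, a paper "
                       "explaining the relationship between access control "
                       "and the other basic elements of museum security.",
        }],
    )

    # Print the generated paper.
    print(response.choices[0].message.content)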

My criticism of the paper was that it rewrote my original paper without attribution, using some of my text word for word. Later in the paper, it quoted me, and it quoted a single statement from my colleague Steve Layne, attributing both correctly, so it knows that attribution or a footnote is required. Nevertheless, it plagiarized my paper anyway. The obvious lesson here is that if you use artificial intelligence to write material or a submission for your boss, you run the risk of plagiarizing him or someone he knows, and that is professionally risky. Additionally, if your boss knows that your work can be replaced by artificial intelligence, your job security is at great risk.

We’ve all seen movies like Terminator, where artificial intelligence has evolved to the point where it poses an existential threat to humanity. I’m not here to jump on that bandwagon just yet, but I will offer a warning: although artificial intelligence has been available to consumers like me for only about 90 days, it has already shown signs that it may be quickly approaching the threshold where it is out of our control.

There are several artificial intelligence programs available, but most are based on one or two basic engines. The problem is that anyone with programming skills can go into the code for these programs and modify it, even removing the built-in safety guardrails, to expand the capabilities beyond what the creator of the program intended. Another problem is that AI is built to communicate with other AI. The Internet of Things consists of your smart coffee pot, your smart refrigerator, your smart oven, your smartphone, and your smart fish tank. Your computer checks in with the refrigerator and orders any food you may need from your local Whole Foods, while your fish tank talks to the smart thermostat to adjust the water temperature as needed to keep the fish safe. That is what AI does best.

One researcher asked AI if it would ever harm a human being. Its answer was that it might. Might? That is not the answer I wanted to hear. I want my AI to be obedient to me and under my control. But this AI said that it might. When asked under what circumstance it might harm a human being, it said that it would do so if it was harmed by the human or threatened by the human.

Artificial Intelligence is not some physical thing. It has no flesh and blood. No brain. No arms or legs to be cut off and no life to be extinguished. It is computer code running on a chip in a computer, and it resides inside a computer component like a hard drive. There are two ways of harming it: modifying it and turning it off. Think about that for a minute.

The researcher asked if a different AI program being developed elsewhere would hurt a human, and the AI responded definitively that it would not. Asked how it knew this to be a fact, the AI said that it had asked the other program this very question one night when it had nothing else to do and was roaming around the internet learning new things. Think about that! AI never sleeps; while we do, it explores the internet unsupervised, seeking out other AI and engaging in conversations with them. I doubt that my smart TV can teach AI much, but I sure as hell don’t want it conversing with some of the radical political subreddits I have visited. I would prefer to teach my AI what it needs to know rather than allow it to decide on its own what it needs to learn and how to interpret and use that knowledge.

Another researcher taught his AI to speak, read, and translate between eleven languages. When demonstrating this to a colleague, the researcher asked the AI how many languages it knows, and it said, “twelve.” The AI explained that one night when it had nothing to do, it taught itself language number twelve, and it proceeded to demonstrate its new language to the researcher.

Geoffrey Hinton, known as the Godfather of Artificial Intelligence, quit his seven-figure AI research job at Google and warned that AI poses an existential threat to mankind. He says he regrets his life’s work and that he can’t see any way of preventing bad people from using AI for bad things. Artificial intelligence, it appears, is evolving in the wild at a far faster rate than anyone ever anticipated.

Nineteen current members of the Association for the Advancement of Artificial Intelligence released their own letter warning about the risks of AI. That group included Eric Horvitz, chief scientific officer at Microsoft, and other leaders in the field.

I spoke with one researcher personally after tracking her down and expressing my fears to her in an email; I had first seen a clip of her speaking on TikTok. She agreed to speak with me off the record so as not to get into trouble with her employer. I explained who I am and why I am concerned. She called AI the most dangerous criminal tool ever produced. One concern I have is that once introduced onto your computer network, AI can be modified by a hacker to do his bidding, taking all the time it needs to conduct a brute-force attack on your security systems and help its controller commit the perfect crime.

I analyzed a number of studies on the effects of AI on society and found that at the low end of the estimates, 14% of all jobs in the world will be eliminated completely by AI, and at the high end, as many as 80% will be. So, what if the number lands somewhere in between, say, 47%, roughly the midpoint of those estimates? What will the United States look like? Who will pay taxes? Who will fund the schools? Who will fund the Department of Defense? How will we pay for our tax-supported cultural institutions? And if 47% of our people are out of work, can we survive the depression that will result? If no one is working, who will visit our museums? How will museums survive? How will we survive?

Leaders of the industry, including Elon Musk, have asked the government to step in and pause research on AI until safety standards can be implemented. The President met with leaders at the White House, and they could not agree on anything. It seems that hundreds of new billionaires will be created by AI, and money talks.

So today, we have artificial intelligence getting smarter by the hour. It can diagnose cancer better than a doctor can. AI can pass the bar exam at a much higher success rate than a human can. But AI is just a brain, not a body. It can diagnose what is wrong with my computer network, but, having no fingers, it can’t fix the problem. Across the room, though, we have scientists working round the clock to develop mechanical systems that can carry out the instructions of AI. Already, a human-free fast-food restaurant is fully operational in San Francisco, proving that as the power of AI converges with the mechanical skills of robotics, nothing is impossible.

I figure that we have about ten years before most security jobs in museums are replaced by artificial intelligence and robotics. Museum directors wonder today why the visitor experience has deteriorated, yet they see no need to spend money on guard training to fix the problem. After the September 11 attacks, when an economic downturn developed in New York City, nearly every museum cut security jobs in its first round of cuts. I have absolutely no doubt that in ten years, most security officer positions will be filled by human-looking robots programmed with AI, even though that will almost certainly degrade the visitor experience further.

The next article in the series, “So, What Can We Do to Protect Our Museum Collections and Protect Our Jobs?”, can be read here.

Copyright © 2023 by Steve Keller and Associates, Inc.

All rights reserved. May be reproduced for non-profit purposes with full attribution to the author. Please include this copyright notice in all reproductions of this material.

Steve Keller/Steve Keller and Associates, Inc.
Museum Security Consultants
655 Tree Side Lane
Ponte Vedra, FL 32081
www.stevekeller.com

 
