Artificial intelligence doesn't seem to be leaving anytime soon. In fact, even the most mundane applications are now incorporating AI into their functionality. It is no longer limited to creative or technical industries; even banks are making it available to their customers in the name of "budget management." Its reach is unlimited, and its potential likely unparalleled. All we can do is wait and see what advances are forthcoming, along with the risks they may pose around privacy and data breaches.
I'm not the type to get caught up in conspiracy theories or end-of-the-world doom. As a Christian, I hope that I live each day filled with the hope of the Lord and His Holy Spirit to discern what is happening in our world without panicking or becoming sensational. "For God has not given us a spirit of fear, but of power and of love and of a sound mind." - 2 Timothy 1:7 Given how quickly knowledge and its application have been accelerating, compared with the rest of world history, I cling to having a sound mind most of all.
So far, from what I've seen, it is optimistic to believe that many challenges can be solved by artificial intelligence. In theory, the concept means that life will be better; we are so technologically dependent now that getting off the grid would involve more work than simply remaining where we are. I don't doubt that there have been groups of people who have been successful without technology; however, it has become more advantageous for companies to switch to this manner of doing business. Paper coupons? Why use those when you can develop an app that goes onto people's devices? Flyers? Why bother when you can email people about sales directly and market other things to them now that you're in their inbox?
At the same time, I am appreciative of those who are sounding the alarm about how quickly AI is growing in usage and the concerns they have around the rights of individuals whose material is being analyzed, studied, and eventually regurgitated in response to the requests being made to the platform. Having worked with passionate, intelligent, and earnest Machine Learning engineers and Natural Language Processing teams, I can't fault them for doing their jobs when the goal is to automate information and process it in such a way that scientific discoveries and breakthroughs can be made. However, much like the Tower of Babel, when is enough enough?
In summary, I think the heart of the issues we're starting to encounter with AI include the following:
- The thirst for possessing information is insatiable. There is always value in procuring more data, to the point where if the Internet has to be scraped or if information has to be collected unethically, it will be done. Is there any recourse? Can information, like rumours, once proliferated, be taken out or taken back? That's highly doubtful.
- There is no governance or accountability for usage. The reason there is moral ambiguity is that it's people who are using these tools, for good or for evil, without checks and balances in place. Technology is not exclusively entrusted to particular stakeholders who understand the need for societal responsibility; anyone can access it, and as such, anyone can use it to their advantage.
- Ownership and intellectual property no longer belong to the human who generated the content in the first place. When art and science can be mimicked, there is no longer a distinction as to who retains what. If art can be generated in the style of an artist, how can people tell the difference between what an artist has painstakingly worked toward building over a lifetime and an imitation that takes seconds to generate?
- For readers, AI has the potential to take over the humanity of storytelling. Recent news about AI-generated books, copyright infringement on 183,000 books, and even audiobook narration all point toward how creatives are being impacted. While the flipside of using AI means that there can be more exposure and access to certain works (see Greg Britton's interview), I think most would agree that the tools should still be used properly and with consent. After all, AI should serve artists, not misappropriate their work for others' profit or entertainment. If it comes to the point where AI can tell a better story about humanity than people can, where would that leave us?
- We don't know what we don't know. We don't know the full reach of AI in our lives, and it may already have extended further than we realize. At the end of the day, AI can give us the illusion of control, but once we relinquish our personal information, it no longer belongs to us. The analogy I think of is from a former classmate who told me that when we drive, we are trusting everyone else on the road not to hit us. In the same way, when we offer data to companies, platforms, services, etc., we are trusting them not to abuse that privilege. Yet car accidents happen every day. Information gets hacked every day. People can lose anything to technology when it's in the wrong hands.
- AI can condition us to bypass discipline. While technology can be a tool to help us become efficient, I won't deny that it can also help us become lazy. I think there can be forms of good laziness, where we don't need to continually be productive in order to possess worth. I also think there's the bad kind of laziness, where over time, certain disciplines or abilities disappear in favour of what's easiest or most convenient. This extends beyond people in later generations no longer being able to read analog clocks or cursive writing; with the advent of AI that can write for you (or do a myriad of other things), why would people be incentivized to take the time to hone their craft? One of the "passive income" hacks I've seen is to get AI to write children's books for you that you can then sell to generate a profit. Not only does this raise multiple copyright issues, but what would be the point of studying children's literature, understanding child psychology, and personally learning from other children's books, written by humans, when a website can do this for you? Obviously, we hope that we can discern the difference, but what if, over time, this becomes more difficult than envisioned?
What have my conclusions been so far? I recognize that it's a field I have more to learn about. Funnily enough, when I worked with a technology company, I was exposed to lots of information about potential usage, but very little on regulation and ethics. This has made me more wary about jumping in with both feet to the widely available tools that are out there now.
My stance for now has been to avoid using AI tools where possible. I know it's integrated into some platforms (e.g. email), but I've been careful not to visit websites that explicitly offer AI. I'm in the minority, but to this day, I have not accessed ChatGPT or any of the AI functions offered on sites like Canva. I don't know how long I can maintain this position, but I want the words I write to genuinely come from me. If I am ever to use AI as a tool, I want it to be precisely that: a tool to help me create, not a tool to do the creation for me. I also haven't created art using AI, and will gladly post imperfect pictures on my Instagram. It's a difficult topic fraught with many complications, and I don't judge others who have used these tools. It's more that I'm cautious about adopting new technology when I don't fully understand the risks and what harm it may be causing to others.
Artificial intelligence may be unavoidable, but let's do everything we can to elevate human intelligence and the image of God that we possess.