Can artificial intelligence (AI) think outside the box?
If the box, now used by ChatGPT, an early leader in AI applications, is mainly a database generated by humans, is a superhuman construct possible? And how do we characterize the manipulation of data to generate desired results? As false? Should there be a gatekeeper?
Or will AI itself conclude that the box of human patterns is inherently fragile, discern our weaknesses and break out? And then push us beyond our capacity to cope? Or push us beyond a principled existence informed by timeless values? What about brain downloads (transplants) for the rich, for example? A 21st-century form of eugenics.
Let me begin with a useful definition. Synthesis: “the composition or combination of parts or elements so as to form a whole.” It is certainly anticipated that when the right questions are asked and the right prompts triggered, new combinations enabled by technology breakthroughs will be pathbreaking and create new wholes.
Yet, I wonder. Will AI’s responses over time lead to another kind of AI (artificial intuition)? Should we anticipate software-informed entrepreneurship? Is it possible that public affairs leaders will find efficacy outside their well-worn campaign myths? Can bias be tested and dismissed, with AI as the new referee?
And what about us? Yeah, us. Are we, the human race, edging closer to obsolescence? Is it possible that a relatively small subset of people using advanced AI will displace large swaths of so-called knowledge workers? Robots v. Humans. From film to reality.
Automated intelligence, my preferred name, has been around for a while. Long-range weather forecasting, war gaming and various search engines are not new. Yet, as AI 2.0 or 3.0 or 4.0 emerges, our sense of self and control risks being victimized.
AI 1.0 has, to date, been mainly a useful tool for efficiency, facilitation and research. Now it is rapidly becoming a phenomenon that, if used constructively, can help us make better decisions. Or provide more services. Sure, it will make some decisions for us, but at the leading edge of discovery it should be more of a helper than decider. But?
Regardless of its positive potential, there will be downsides. Leadership will be tested over and over. And those tests will not be simple. AI will produce new findings and hypotheses; humans have to make sure they are in control of the final step. Or will that even be preferred? Competition will test that proposition, as the leading edge will produce results that shake our walls of comfort. Technology and capitalism are insistent. And again, deception today is not in short supply, and the new tools will make it easier and potentially more compelling.
But if asked in a court hearing why you did something, it will not be acceptable to blame AI. Humanity has to be responsible for the use of the tools it has designed and pushed. Right now, the disciplines that use AI should agree on and practice source identification. If AI is the source, it should be identified just as we attribute content to writers or independent sources like, for example, the Associated Press (AP). And the identification should stand out.
The tools and potential of AI are orders of magnitude more important than the browser, which opened up a wealth of information. Best that we turn to leaders in public affairs who understand, to a degree, this rapid development. Maybe the first question in a Presidential debate should be: How will you use AI to the benefit of Americans? Second question: What will you do to make sure AI doesn’t double back on humankind? Perhaps the public will be able to discern if the intelligence of the candidate is the source of an answer, or if he or she defaulted to artificial intelligence, with an emphasis on artificial. Finally, in finishing this brief essay I asked ChatGPT about the impact of AI on society. Here is what came back:
“The impact of artificial intelligence (AI) on society can be both constructive and potentially destructive, depending on how it is developed, deployed, and regulated. AI has the potential to greatly benefit society by improving efficiency, automating tasks, and solving complex problems. It can enhance healthcare, optimize transportation systems, and aid in scientific research.
However, there are also concerns about the negative aspects of AI. Misuse of AI technology could lead to privacy violations, algorithmic bias, and job displacement. There is a need for ethical guidelines, responsible development, and careful regulation to mitigate these risks and ensure that AI is used for the greater good.
Ultimately, whether AI is more constructive or destructive will depend on how we as a society collectively navigate its development and deployment, while ensuring proper safeguards are in place to address potential risks.”
Not bad. But we will need to keep in mind “ensuring proper safeguards.” There will be many faceoffs between “survival of the fittest” and timeless values. Societally, do we embrace timeless values, and can we use them?
Al Sikes is the former Chair of the Federal Communications Commission under George H.W. Bush. Al writes on themes from his book, Culture Leads Leaders Follow published by Koehler Books.