A Start to Futuring

 Starting this blog off with a formal post. As it progresses (and time permits), I plan for the posts here to be casual takes on things I may have written more formally elsewhere. Give R.U.R. a read if you have time. The play was originally written in Czech a few years before the 1923 English translation referenced here.


Artificial General Intelligence (AGI), also called strong artificial intelligence, is a proposed advancement in AI whereby machine intelligence would equal human intelligence. This theoretical endeavor pursues AI in the tradition that human intelligence is the benchmark by which intelligence on Earth is measured, and that machine intelligence should strive to match the innate intelligence of humans. That tradition is juxtaposed against another AI philosophy, wherein machine intelligence need not converge on human intelligence; instead, there may be other ways for machines to exceed human intelligence that are more conducive to computing. Korteling et al. (2021) argue that machine intelligence will remain fundamentally different from human intelligence for the foreseeable future, and that human intelligence need not be the only benchmark for machine intelligence.

The 2023 Gartner Hype Cycle for AI suggests that AGI is approaching the peak of its hype, after which it will head toward Gartner's “trough of disillusionment.” Gartner places the achievement of AGI more than ten years out. You can see the AI hype cycle at https://emtemp.gcom.cloud/ngw/globalassets/en/articles/images/hype-cycle-for-artificial-intelligence-2023.png.

Barriers to Achieving Artificial General Intelligence

Advances toward AGI face many hurdles. The primary hurdle today is technical: achieving general intelligence in machines requires advancing architectures and software toward the capabilities of human biology. Much research focuses on the human brain, specifically on cognitive decision-making, and there is nascent research in modeling human neurons in computer hardware. This branch of AI, neuromorphic computing, is comparatively new, having first manifested in silicon fewer than ten years ago. As advanced as neuromorphic processing units (NPUs) might be, they have a long way to go before they approach the structure of the human brain.
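
To make "modeling human neurons in hardware" a little more concrete, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking-neuron models underlying neuromorphic research. This is a minimal illustration, not tied to any particular NPU, and all parameter values are arbitrary, chosen only for demonstration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation.
# A toy illustration of the spiking models behind neuromorphic hardware;
# the parameter values are arbitrary, not drawn from any real chip.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the membrane voltage trace and the spike times."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Membrane leaks toward resting potential while input current charges it.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:          # Threshold crossed: emit a spike.
            spikes.append(step * dt)
            v = v_reset               # Reset the membrane after spiking.
        voltages.append(v)
    return voltages, spikes

# Constant input strong enough to drive periodic spiking.
volts, spike_times = simulate_lif([1.5] * 200)
print(f"{len(spike_times)} spikes, first few at t = {spike_times[:5]}")
```

Even this caricature hints at the gap the paragraph describes: a real cortical neuron integrates thousands of inputs with rich dynamics, while the model above reduces it all to one decaying number and a threshold.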

Moreover, human intelligence is not confined to the brain. Biological neurons occur throughout the human body and provide input to the cognitive process. The subjective character of these sensory experiences is what philosophers call qualia. An easy example involves haptic stimuli, the sense of touch: a hand feels the heat of a fire, and that sensation is passed to the brain. A typical cognitive decision based on that stimulus, the feeling of heat, would direct the hand away from the heat source. Vision (ocular), hearing (aural), taste (gustatory), and smell (olfactory) are similar stimuli passed to the human brain for cognitive processing. Some of these senses have digital analogs, such as webcams and haptic gloves, but the sensory capability of machines does not approach that of humans. Nor does the processing of different stimuli, such as images and audio, via artificial neural networks (ANNs) achieve the cognitive efficiency of the human brain.
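
The heat-withdrawal example can be caricatured in a few lines of code: a sensor reading crosses a threshold and triggers a protective action. The names, threshold, and data structure below are invented for illustration, and the gulf between this sketch and genuine subjective experience is exactly the point.

```python
# A deliberately crude caricature of the heat-withdrawal reflex described
# above. All names and values are hypothetical; the point is how little of
# the richness of human qualia survives once a stimulus becomes a number.

from dataclasses import dataclass

@dataclass
class HapticStimulus:
    location: str
    temperature_c: float      # Sensed surface temperature in Celsius.

PAIN_THRESHOLD_C = 45.0       # Roughly where humans begin to feel burning pain.

def cognitive_decision(stimulus: HapticStimulus) -> str:
    """Map a sensed temperature to an action, standing in for cognition."""
    if stimulus.temperature_c >= PAIN_THRESHOLD_C:
        return f"withdraw {stimulus.location} from heat source"
    return f"no action; {stimulus.location} is safe"

print(cognitive_decision(HapticStimulus("left hand", 70.0)))
print(cognitive_decision(HapticStimulus("left hand", 22.0)))
```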

The technical barriers seem insurmountable, but if they are ever breached, the social and ethical barriers will loom even larger. There is already widespread public concern about the social ramifications of AI; issues already covered in the press include the displacement of creative workers and inherent bias in AI systems. Were AGI achieved, the social debate would escalate to a heated roar, with communities raging about the displacement of humans on one side and the philosophical rights of AGI machines on the other. If machines attain a level of consciousness, one of the theoretical benchmarks of AGI, a segment of society will undoubtedly argue that such machines have innate rights. This dilemma is the basis of R.U.R. (Rossum's Universal Robots), a play by early 20th-century playwright Karel Čapek (Čapek, 1923). For more than 100 years, society has contemplated this question despite its technological impossibility at the time the play was written.

Ethical concerns about AI have resurged, spurring significant research into explainable AI (XAI). The ethics of AI in society has reached the level of regulatory oversight, with the European Union (EU) already having drafted AI legislation. Claims of bias are rampant, particularly against recommender systems whose outputs feed into life-altering decisions (The Berkman Klein Center for Internet & Society, 2019). We can imagine a time when a self-aware AGI system is itself accused of bias against humans. Those future debates are staggering to ponder.
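
As one concrete way such bias claims get quantified, the sketch below computes a demographic-parity gap: the difference in favorable-outcome rates between two groups. The data are fabricated for illustration, and demographic parity is only one of many contested fairness metrics, not a definitive test.

```python
# Minimal sketch of one common fairness check: the demographic-parity gap,
# i.e., the difference in favorable-outcome rates between two groups.
# The decision lists below are made up purely for illustration.

def positive_rate(decisions):
    """Fraction of decisions that were favorable (True)."""
    return sum(decisions) / len(decisions)

# Hypothetical recommender outcomes for two demographic groups.
group_a = [True, True, False, True, True, False, True, True]
group_b = [True, False, False, False, True, False, False, True]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Group A rate: {positive_rate(group_a):.2f}")   # 0.75
print(f"Group B rate: {positive_rate(group_b):.2f}")   # 0.38
print(f"Demographic-parity gap: {gap:.2f}")            # Large gaps invite scrutiny.
```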

Conclusion

Research into AGI will continue. Though AGI is likely 20 to 50 years in the future, there will come a time when humans create an intelligent machine that closely imitates the human intellect. If history is any lesson, the technical barriers will be leveled, leaving the societal and ethical aftermath for future generations to clean up.

References

Čapek, K. (1923). R.U.R. (Rossum's universal robots) (P. Selver, Trans.; N. Playfair, Adapt.). Doubleday, Page, and Company. https://www.gutenberg.org/files/59112/59112-h/59112-h.htm

Korteling, J. E., van de Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus artificial intelligence. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.622364

The Berkman Klein Center for Internet & Society. (2019, August 19). Please stop doing "explainable" ML - Cynthia Rudin. YouTube. https://youtu.be/I0yrJz8uc5Q
