![](https://ogden_images.s3.amazonaws.com/www.timesrepublican.com/images/2025/02/09211251/PXL_20250208_1951238053.jpg)
T-R Photo by Dolly Tanmen — Scott Samuelson of Iowa State’s Department of Philosophy and Religious Studies presented “AI and the Human Future” at the Marshalltown Public Library on Saturday afternoon.
The Marshalltown Public Library, Humanities Iowa and the Iowa Historical Society co-hosted a thought-provoking program at the library on Saturday titled “AI and the Human Future.” The presenter was Scott Samuelson of the Iowa State University Department of Philosophy and Religious Studies, appearing through the university’s extension and outreach efforts.
He spoke about ethical, moral, political and even environmental concerns related to artificial intelligence.
Samuelson defined AI as a system that can “predict, recommend, or make decisions” influencing real or virtual environments in pursuit of a particular set of human-defined objectives.
In 1985, Russian chess grandmaster Garry Kasparov played simultaneous games against 32 of the best chess computers in the world and won against all of them. In 1996, Kasparov defeated “Deep Blue,” an IBM chess computer, in a match. The following year, Deep Blue defeated Kasparov. Newsweek magazine’s headline for the story was “The Brain’s Last Stand.”
Fast forward to 2005, when PlayChess.com hosted a wide-open tournament. Some participants were grandmasters, some were amateurs, and computers were allowed. The winners were two American amateur players using three chess computers.
Kasparov concluded from this that a weak human plus a machine plus a better process is superior to a strong computer alone and, more surprisingly, superior to a strong human plus a machine plus an inferior process.
Samuelson compared this to what happened after the invention of the camera. Suddenly, portrait painters were no longer in demand. As a result, painters had to reinvent their art, and the world of art was transformed. The camera also opened portraiture to everyone; even amateurs can take photos. Mass media was transformed as well. That democratization could also be destabilizing, as we saw with the Nazi Party, which used mass media for propaganda.
Three AI roads are open to us: 1.) Alternative: what can AI do for us? 2.) Collaboration: what could be enhanced by humans working together with AI? 3.) Certified organic: what is important for humans to do without AI? The big questions AI raises are who we are, what we care about, and how we reinvent our institutions to focus on what truly matters. Do we want to rely solely on AI for a medical diagnosis? How comfortable would we be in a hospital without the human interaction we normally expect?
“Good Old-Fashioned AI” (GOFAI) is where specific tasks in a particular environment are programmed directly into machines. An example is a floor-cleaning robot. The problem is that it is not easy to program everything the machine might encounter while carrying out its task. How do you program a floor-cleaning machine to respond properly to every possible situation?
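As a rough illustration (a hypothetical sketch, not something shown at the presentation), a GOFAI-style floor cleaner amounts to a hand-written table of rules, and anything the programmer did not anticipate simply falls through the cracks:

```python
# Hypothetical GOFAI-style controller: every situation must be anticipated
# by the programmer and written out by hand as an explicit rule.
def gofai_vacuum(sensor_reading: str) -> str:
    rules = {
        "open_floor": "keep_moving",
        "wall_ahead": "turn_right",
        "stairs_ahead": "back_up",
        "dirt_detected": "run_brushes",
    }
    # A situation with no rule leaves the machine with nothing useful to do.
    return rules.get(sensor_reading, "stop_and_wait_for_help")

print(gofai_vacuum("wall_ahead"))    # turn_right
print(gofai_vacuum("cat_on_floor"))  # stop_and_wait_for_help -- never anticipated
```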
Beyond that, there is machine learning, in which the goal is programmed into the machine and the machine learns for itself how to achieve it. In supervised learning, the AI is given concrete feedback as it learns to perform a task, like studying from flash cards. In unsupervised learning, the AI finds previously unspecified patterns and figures things out for itself.
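A minimal sketch of the difference (a toy example of my own with an invented temperature data set, not one used at the talk):

```python
# Supervised learning ("flash cards"): every example comes with the right answer.
labeled_cards = [(35, "cold"), (40, "cold"), (75, "hot"), (80, "hot")]

def classify(temp):
    # Answer with the label of the nearest example already seen.
    nearest = min(labeled_cards, key=lambda card: abs(card[0] - temp))
    return nearest[1]

print(classify(42))  # "cold" -- learned from the answer key

# Unsupervised learning: no answers are given; the system simply groups the
# data by whatever pattern it can find on its own.
unlabeled = [35, 40, 75, 80]
midpoint = sum(unlabeled) / len(unlabeled)
groups = {
    "group_a": [t for t in unlabeled if t < midpoint],
    "group_b": [t for t in unlabeled if t >= midpoint],
}
print(groups)  # two clusters emerge, but nothing says what they "mean"
```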
There is also narrow AI, an AI system designed to perform a specific task, such as playing chess or cleaning the floor. Artificial General Intelligence (AGI), also called human-level AI or superintelligence, would be able to perform many different tasks, including tasks it was not specifically designed for, such as playing chess, answering questions, or driving a car.
We’ve all heard of algorithms developed to generate the intended behavior of AI.
These algorithms are developed in academic, corporate and government research environments by people with doctoral degrees in mathematical fields. AI models are then trained by data scientists or analysts, who may or may not have advanced mathematical expertise.
Training on this data often requires vast amounts of energy and water, which has an impact on the environment.
Samuelson noted that AI products are usually sponsored by powerful organizations with profit motives. Open-source AI, on the other hand, is freely available to everyone and offers transparency and community-driven ethics as a counterweight to exclusive control.
Next came the question of humans versus machines. AI can already outperform humans at many tasks, but what can’t it do? We must not forget that AI lacks common sense and sometimes “hallucinates,” making things up, and that its creativity is not truly original or innovative. It works quickly, but it lacks awareness of what it does not know. AI can only fake EI, or emotional intelligence. (Do we really want AI to write loving letters to our mothers?) Finally, AI does not suffer, love, or eat. Human intelligence is biological, says Samuelson; we get hungry, so we learn how to make food.
One challenge facing AI use is becoming overly reliant on it. After all, it is not really all that clever! As we become more dependent on AI, we lose the skills needed to be good stewards of it. Samuelson calls this “deskilling.” There is also the fear of mass unemployment: will AI make more jobs irrelevant than it creates? AI may also intensify evil if bad actors hack it and use it for malicious purposes.
A transportation case study Samuelson offered was the crashes of two Boeing 737 MAX jets in 2018 and 2019, caused by automated systems acting on malfunctioning sensors. The first problem was that the AI made a mistake. The second problem was that humans made mistakes: the pilots either could not disable the malfunctioning system or were not trained to do so. The resulting risks of AI in this transportation case are that pilots become deskilled, pilots let down their guard, fewer pilots are employed, and finally that bad actors find a way to hack and deceive the AI system.
Beyond that, another major concern for many is the depletion of resources. AI consumes huge amounts of water and electricity. Do AI’s benefits outweigh the resources it consumes? The Economist magazine declared, “The world’s most valuable resource is no longer oil, but data.”
Next, there is the question of who should control our data: government power or corporate power? In the U.S., corporate power holds the reins. In China, government power does. You can argue that neither is a good option.
Samuelson concluded his presentation with an affirmation of the humanities, saying that we still need the arts and crafts, and that we still need to be deeply connected to the world and each other. Let AI be useful where it genuinely helps, but let us not allow it to dehumanize us.