Artificial Intelligence Malaysia Will Comply With Our Requests. That is an issue.
The danger of having artificially intelligent machines carry out our wishes is that we may not be as circumspect in our desires. The lines of code that drive these machines will invariably lack nuance, omit cautions, and wind up providing AI systems with goals and incentives that are inconsistent with our genuine preferences. There are artificial intelligence Malaysia which have the same situation too.
In 2003, the Oxford philosopher Nick Bostrom proposed a now-classic thought experiment illuminating this issue. Bostrom envisioned a superintelligent robot taught to manufacture paper clips. Eventually, the robot transforms the entire world into a huge paper clip factory.
Such a situation can be dismissed as academic, a concern for the far future. However, misaligned AI has become a problem far sooner than anticipated.
Concern from AI,artificial intelligence malaysia
The most concerning example is one that has a global impact. YouTube uses AI-based content recommendation algorithms to enhance viewing time. Two years ago, computer experts and users noticed that YouTube’s algorithm appeared to accomplish its goal by increasingly recommending extremist and conspiratorial content. After seeing footage of Donald Trump campaign rallies, one researcher found that YouTube then gave her videos with “white nationalist rants, Holocaust denials, and other troubling content.” “Vids on vegetarianism lead to videos about veganism,” she explained.
Jogging videos lead to videos about ultramarathon running.” As a result, research indicates that YouTube’s algorithm has aided in the polarisation and radicalization of people, as well as the propagation of disinformation, all in the name of keeping us watching. “If I were planning ahead, I probably would not have chosen it as the first test case for how we’re going to scale this technology,” said Dylan Hadfield-Menell, an artificial intelligence researcher at the University of California, Berkeley.
Human Experience
What matters, and so what AI was designed to accomplish, was the aggregate quality of human experience.
YouTube’s engineers almost certainly had no intention of radicalising humanity. However, coders cannot possibly consider everything. “The current state of AI places a great deal of responsibility on designers to comprehend the repercussions of the incentives they give their systems,” Hadfield-Menell explained. “And one of the things we’re discovering is that many engineers make errors.”
Expert’s Opinion
A significant component of the difficulty is that humans frequently do not know what goals to assign to our AI systems since we are unsure of what we truly desire. “If you ask someone on the street what they want their autonomous car to accomplish, they’ll respond ‘Collision avoidance,'” said Dorsa Sadigh, a Stanford University AI scientist who specialises in human-robot interaction. ”
However, you realise that is not the case; people have a variety of preferences.” Super safe self-driving cars go at a snail’s pace and brake frequently enough to make passengers queasy. When programmers attempt to compile a list of all the objectives and preferences that a robotic car should balance concurrently, the list is invariably incomplete. Sadigh stated that while travelling in San Francisco, she frequently became stranded behind a stalled self-driving car. It is securely avoiding contact with a moving object, as instructed by its engineers — but the object is something like to a billowing plastic bag.
Machine Learning
To circumvent these difficulties and maybe resolve the AI alignment problem, academics have begun developing a totally new way for creating beneficial machines. The technique is most closely related to the concepts and research of Stuart Russell, a distinguished computer scientist at the University of California, Berkeley. Russell, 57, pioneered work in the 1980s and 1990s on rationality, decision-making, and machine learning and is the primary author of the widely used textbook Artificial Intelligence: A Modern Approach. In the last five years, he has established himself as an influential voice on the alignment issue and a ubiquitous figure — a well-spoken, reserved British gentleman dressed in a black suit — at international conferences and panels discussing the risks and long-term governance of artificial intelligence malaysia.
Artificial robot
According to Russell, despite its success at specialised tasks such as beating humans at Jeopardy! Assigning a machine to optimise a “reward function” — a meticulous description of some combination of goals — will inevitably result in misaligned AI, Russell argues, because it is impossible to include and weight all goals, subgoals, exceptions, and caveats in the reward function, or to even know which ones are the right ones. As free-roaming, “autonomous” robots become more sophisticated, assigning them goals will become increasingly perilous, as the robots will be relentless in their pursuit of their reward function and will attempt to prevent humanity from turning them off.
Rather than pursuing their own objectives, the new thought holds that machines should attempt to meet human tastes; their sole objective should be to learn more about our preferences. Russell argues that uncertainty about our choices and the requirement for AI systems to seek our advice will keep them secure. Russell presents his theory in his latest book, Human Compatible, as three “principles of useful machines,” evoking Isaac Asimov’s 1942 three laws of robotics, but with less naiveté. Russell’s version reads as follows:
The machine’s sole purpose is to maximise human preferences being realised.
The machine is first unsure of its inclinations.
Human behaviour is the ultimate source of information regarding human preferences.
Source: mobius.co