
Sunday 20 May 2018

The Dark Secret at the Heart of AI

Comment: This is an interesting article from last year that touches on the known, official nature of Artificial Intelligence and its dangers. It's very much the tip of a very large iceberg and, although a welcome cautionary tale, it doesn't even touch on the deeper dangers away from commercial and consumer applications.

When we factor in the inevitable black projects combined with directed-energy weaponry and mass mind control, it opens up a Pandora's box that is highly secretive and well away from any kind of public oversight, including the normal parameters of State control. This is indeed an area that is up and running.

We have no idea how far this technology has come in corralling the mass mind, but given that such a box of tricks is deeply seductive to the psychopathic mind presently dominating the corridors of power, it is more than probable that experimentation has been undertaken in concert with, and as an outgrowth of, MK ULTRA and other programs which were never dismantled but merely outsourced, DARPA being the public hub of these "creative" endeavours.

Much more research is needed, not least from whistleblowers. However, for obvious reasons, such people are acutely aware that coming forward about these black ops means they effectively paint a target on their backs and become the next in line for experimentation. This has likely happened many times.

Who would want to willingly place themselves in that particular line of fire? Hence the in-built protection that such projects are afforded.

And when engineers still don't really understand how some aspects of AI work, we really have a recipe for a technology that can grow its own consciousness and thus be beyond the control of human rules.

Please read my Technocracy series for further information.

-----------------------

Will Knight
MIT Tech Review

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
 
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
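To make the setup the article describes more concrete, here is a minimal sketch of "learning to drive by watching a human": camera frames go in, a single steering command comes out, and the only training signal is recorded human behaviour. It is purely illustrative, written in PyTorch with made-up image sizes and layer widths; it is not Nvidia's actual system.

# Illustrative sketch only: an end-to-end network trained by imitating a human driver.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers turn raw camera frames into features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Fully connected layers map those features to one steering angle.
        self.head = nn.Sequential(
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, frames):
        return self.head(self.features(frames))

model = EndToEndDriver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(frames, human_steering):
    # The only "instruction" is the angle a human driver actually chose.
    optimizer.zero_grad()
    predicted = model(frames)
    loss = loss_fn(predicted, human_steering)
    loss.backward()
    optimizer.step()
    return loss.item()

frames = torch.randn(8, 3, 66, 200)   # dummy camera frames (sizes are arbitrary)
human_steering = torch.randn(8, 1)    # dummy recorded steering angles
print(train_step(frames, human_steering))

Once trained, there is no single line of code or rule to point at when the car steers one way rather than another; the behaviour lives in the learned weights, which is exactly the opacity the rest of the article is about.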

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
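To see why that distinction matters, here is a small sketch using invented loan-style data (the feature names and numbers are made up purely for illustration): a logistic regression is the kind of mathematical model whose reasoning can be read straight off its coefficients, while a multi-layer network trained on the same data offers no such handle.

# Illustrative sketch with made-up loan data: an inspectable model versus a "black box".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]   # hypothetical inputs
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

# A logistic regression is a model you can "get access to": each learned
# coefficient says how strongly a feature pushes the decision and in which direction.
simple = LogisticRegression().fit(X, y)
for name, coef in zip(features, simple.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# A multi-layer network can fit the same data, but its thousands of weights
# have no such one-to-one reading; there is no coefficient to point at.
opaque = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
print("black-box prediction:", opaque.predict(X[:1]))

Nothing in the sketch is specific to lending; the same contrast applies to the parole, hiring, and military decisions mentioned above.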

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

