Validating suffering

Posted by / 02-Dec-2017 08:23

In line with the attitudes of my peers, I assumed that Kurzweil was crazy and that while his ideas deserved further inspection, they should not be taken at face value.

On the other hand, it would not necessarily respect human values, including the value of preventing the suffering of less powerful creatures.

I now think it's quite likely (maybe ~75%) that humans will produce at least a human-level AI within the next ~300 years conditional on no major disasters (such as sustained world economic collapse, global nuclear war, large-scale nanotech war, etc.), and also ignoring anthropic considerations.

The "singularity" concept is broader than the prediction of strong AI and can refer to several distinct sub-meanings.

There were stylistic differences, such as computer science's focus on cross-validation and bootstrapping instead of testing parametric models, a shift made possible because computers can run data-intensive operations that were inaccessible to statisticians in the 1800s.
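To make the contrast concrete, here is a minimal sketch of both resampling techniques mentioned above: bootstrapping the standard error of a fitted slope, and k-fold cross-validation of out-of-sample error. The dataset, noise level, and fold count are invented purely for illustration; a nineteenth-century statistician would instead have relied on a closed-form parametric formula for the slope's sampling distribution.

```python
import random
import statistics

random.seed(0)

# Toy data (invented for illustration): noisy linear trend y ≈ 2x + 1.
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]
data = list(zip(xs, ys))

def fit_slope(pairs):
    """Ordinary least-squares slope through the points."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = sum((x - mx) ** 2 for x, _ in pairs)
    return num / den

# Bootstrapping: resample the data with replacement many times and use the
# spread of the re-fitted slopes as a standard error, rather than assuming
# a parametric sampling distribution.
boot_slopes = []
for _ in range(1000):
    sample = [random.choice(data) for _ in data]
    boot_slopes.append(fit_slope(sample))
boot_se = statistics.stdev(boot_slopes)

# K-fold cross-validation: hold out each fold in turn, fit on the rest,
# and measure squared error on the held-out points.
def cross_val_mse(pairs, k=5):
    errors = []
    for fold in range(k):
        test = pairs[fold::k]
        train = [p for i, p in enumerate(pairs) if i % k != fold]
        slope = fit_slope(train)
        mx = statistics.mean(x for x, _ in train)
        my = statistics.mean(y for _, y in train)
        intercept = my - slope * mx
        errors.extend((y - (slope * x + intercept)) ** 2 for x, y in test)
    return statistics.mean(errors)

print(f"bootstrap SE of slope: {boot_se:.3f}")
print(f"5-fold CV mean squared error: {cross_val_mse(data):.3f}")
```

Both procedures trade analytic formulas for brute-force recomputation of the fit, hundreds or thousands of times, which is exactly the kind of work that only became cheap once computers did it.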

But overall, this work didn't seem like the kind of "real" intelligence that people talked about for general AI.

For general background reading, a good place to start is Wikipedia's article on the technological singularity.
