In line with the attitudes of my peers, I assumed that Kurzweil was crazy and that while his ideas deserved further inspection, they should not be taken at face value.
On the other hand, it would not necessarily respect human values, including the value of preventing the suffering of less powerful creatures.
I now think it's quite likely (maybe ~75%) that humans will produce at least a human-level AI within the next ~300 years conditional on no major disasters (such as sustained world economic collapse, global nuclear war, large-scale nanotech war, etc.), and also ignoring anthropic considerations.
The "singularity" concept is broader than the prediction of strong AI; it encompasses several distinct sub-meanings.
There were stylistic differences, such as computer science's emphasis on cross-validation and bootstrapping rather than on testing parametric models -- approaches made possible because computers can run data-intensive operations that were inaccessible to statisticians in the 1800s.
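To make the contrast concrete, here is a minimal sketch (using only Python's standard library, with made-up sample data) of the bootstrap idea: instead of assuming the data follow some parametric distribution, we resample the observed data many times and read uncertainty off the resulting empirical distribution.

```python
# A hypothetical illustration of bootstrapping: estimate the uncertainty
# of a sample mean by resampling, with no parametric distributional assumptions.
import random
import statistics

random.seed(0)

# A small, made-up data sample.
data = [2.1, 3.5, 2.8, 4.0, 3.3, 2.9, 3.7, 3.1, 2.6, 3.9]

def bootstrap_means(sample, n_resamples=10_000):
    """Draw resamples (with replacement) and record each resample's mean."""
    n = len(sample)
    return [
        statistics.mean(random.choices(sample, k=n))
        for _ in range(n_resamples)
    ]

means = sorted(bootstrap_means(data))
# Empirical 95% interval: the 2.5th and 97.5th percentiles of the resampled means.
lo = means[int(0.025 * len(means))]
hi = means[int(0.975 * len(means))]
print(f"mean = {statistics.mean(data):.2f}, 95% bootstrap CI ~ ({lo:.2f}, {hi:.2f})")
```

This kind of brute-force resampling is trivial on a modern computer but would have required thousands of hand computations in the pre-computer era, which is why such methods only became practical once machines could do the arithmetic.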
But overall, this work didn't seem like the kind of "real" intelligence that people talked about for general AI.
For general background reading, a good place to start is Wikipedia's article on the technological singularity.