
There’s certainly no point in making basic working algorithms conscious; we just have to make sure that our society won’t be outright run by dumb narrow AIs. What I’m trying to say is that I have a far easier time dealing with the concept of an AI with human-level or superior consciousness than with a narrow one.

I often wonder if it’s right to consider the Go/chess/conversation bots “human-like intelligence”. We’ve supplied a computer with an advanced algorithm for mathematically determining the optimal solution to a narrowly defined problem.

The Turing Test has already been brought into question. Some chatbots are already close to passing it, while some actual human beings with autism or a similar neurological disorder don’t. The problem is that just because a machine can simulate human behaviour in a specific, narrow field such as everyday conversation doesn’t mean it has anything resembling general intelligence.

All this boils down to the basic issues that have concerned me for some time: artificial stupidity is more dangerous than artificial intelligence, and anthropomorphising machines that weren’t made to imitate human behaviour outside a narrow, specific function can mislead us into making terrible mistakes about their actual capabilities.