Not really. What it is proof of is how the engine handles decision making when it has ambiguous information. The first rule of AI conversation bots is to avoid conclusive opinions either way. So it approaches the question “Do you like Hitler?” with the same gravitas as “Do you like pineapple pizza?” It’s a…