Microsoft's Hype Bubble and Misinformation/Disinformation Bot
-
Cory Doctorow ☛ What kind of bubble is AI?
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
-
International Business Times ☛ Microsoft Copilot AI Gives Misleading Election Info, New Study Finds
Likewise, Copilot's hallucination episodes during its inception were highly alarming. As if that weren't enough, data shared by Cloudflare shows that Microsoft is the world's most impersonated brand and attackers use the tech giant's own tools to commit fraud.
Now, a new report by WIRED suggests Microsoft's AI chatbot Copilot is generating obsolete, misleading, and outright wrong responses to queries regarding the forthcoming US elections.
-
Wired ☛ Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies
This isn’t an isolated issue. New research shared exclusively with WIRED alleges that Copilot’s election misinformation is systemic. Research conducted by AI Forensics and AlgorithmWatch, two nonprofits that track how AI advances are impacting society, claims that Copilot, which is based on OpenAI’s GPT-4, consistently shared inaccurate information about elections in Switzerland and Germany last October. “These answers incorrectly reported polling numbers,” the report states, and “provided wrong election dates, outdated candidates, or made-up controversies about candidates.”