Web Browsers and Proprietary 'AI'-Washing by Microsoft
-
Terence Eden ☛ A quick look inside the HSTS file
You type example.com into your browser's address bar and it automatically redirects you to the https:// version. How does your browser know that it needs to request the more secure version of a website?
The answer is... a big list. The HTTP Strict Transport Security (HSTS) preload list collects the domain names which have told Google that they always want their website served over https. If the user tries to manually request the insecure version, the browser won't let them. This means that a user's connection to, for example, their bank cannot be hijacked. A dodgy WiFi network cannot force the user to visit an insecure and fraudulent version of a site.
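How does that list reach the browser? It ships inside the browser itself (Chromium maintains it as a JSON file in its source tree), so the upgrade happens before a single packet leaves the machine. Here is a minimal TypeScript sketch of that lookup; the two-entry set is a made-up stand-in for the real list:

    // Hypothetical stand-in for the preloaded HSTS list bundled with the browser.
    const hstsPreload: Set<string> = new Set(["example.com", "bank.example"]);

    // Rewrite http:// to https:// before any network request is made,
    // mirroring what the browser does for preloaded domains.
    function upgradeIfPreloaded(url: URL): URL {
      if (url.protocol === "http:" && hstsPreload.has(url.hostname)) {
        url.protocol = "https:";
      }
      return url;
    }

    console.log(upgradeIfPreloaded(new URL("http://example.com/login")).href);
    // -> https://example.com/login

(The real list also records flags such as includeSubDomains, which this sketch ignores.) Sites opt in by serving the Strict-Transport-Security response header with the preload directive, for example Strict-Transport-Security: max-age=63072000; includeSubDomains; preload, and then submitting the domain at hstspreload.org.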
-
Jake Lazaroff ☛ The Website vs. Web App Dichotomy Doesn't Exist
A more nuanced view is that there’s a spectrum between website and web app, and that where a project sits determines which technologies are appropriate to build it. The implication is that at some point, it makes sense to use a JavaScript framework rather than progressively enhanced HTML. Web developers tend to divide themselves into roughly two camps here — and depending on which camp you ask, the location of that inflection point varies widely.
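To make the "progressively enhanced HTML" end of that spectrum concrete, here is a hedged TypeScript sketch (the element IDs and endpoint are invented for illustration): a plain form that still works with JavaScript disabled, upgraded to an in-page submission when script is available.

    // Assumes markup like: <form id="search-form" action="/search" method="post">
    // With no JavaScript, the form still submits as a normal full-page POST.
    const form = document.querySelector<HTMLFormElement>("#search-form");

    if (form) {
      form.addEventListener("submit", async (event) => {
        event.preventDefault(); // skip the full-page navigation fallback
        const response = await fetch(form.action, {
          method: "POST",
          body: new FormData(form),
        });
        const results = document.querySelector("#results");
        if (results) {
          // Swap in the server's HTML fragment instead of reloading the page.
          results.innerHTML = await response.text();
        }
      });
    }

The disagreement between the two camps is, roughly, how far a project can climb this enhancement ladder before reaching for a framework pays off.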
-
Proprietary/Artificial Intelligence (AI)
-
Gizmodo ☛ Microsoft’s ‘AI Browser’ Edge Is a Precursor to the ‘AI’-ification of Everything
Microsoft is trying to rebrand its Edge [Web] browser. No longer should its name remind you that its icon shortcut sits alone and forgotten at the edge of your Windows desktop. Now Microsoft is trying to claim Edge is on the cutting edge of AI. The Redmond tech giant has started calling its native internet explorer “Microsoft Edge: AI Browser.” If you think that’s already a little on the nose, expect more companies to follow suit in the coming year.
-
The Register UK ☛ Microsoft disables Windows app installation, again
-
Ars Technica ☛ ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate
ChatGPT is still no House, MD.
While the chatty AI bot has previously underwhelmed with its attempts to diagnose challenging medical cases—with an accuracy rate of 39 percent in an analysis last year—a study out this week in JAMA Pediatrics suggests the fourth version of the large language model is especially bad with kids. It had an accuracy rate of just 17 percent when diagnosing pediatric medical cases.
The low success rate suggests human pediatricians won't be out of jobs any time soon, in case that was a concern. As the authors put it: "[T]his study underscores the invaluable role that clinical experience holds." But the study also identifies the critical weaknesses that led to ChatGPT's high error rate and ways to transform it into a useful tool in clinical care. With so much interest in and experimentation with AI chatbots, many pediatricians and other doctors see their integration into clinical care as inevitable.
The medical field has generally been an early adopter of AI-powered technologies, resulting in some notable failures, such as creating algorithmic racial bias, as well as successes, such as automating administrative tasks and helping to interpret chest scans and retinal images. There's also a lot in between. But AI's potential for problem-solving has raised considerable interest in developing it into a helpful tool for complex diagnostics—no eccentric, prickly, pill-popping medical genius required.
-