today's leftovers
-
Web Browsers
-
Facundo Olano ☛ Reclaiming the Web with a Personal Reader
It took me about three months of (relaxed) work to put together my personal feed reader, which I named feedi. I can say that I succeeded in reengaging with software development, and in building something that I like to use myself, every day. Far from a finished product, the project feels more like my Emacs editor config: a perpetually half-broken tool that can nevertheless become second nature, hard to justify from a productivity standpoint but fulfilling because it was built on my own terms.
I’ve been using feedi as my “front page of the [Internet]” for a few months now. Beyond convenience, by using a personal reader I’m back in control of the information I consume, actively on the lookout for interesting blogs and magazines, better positioned for discovery and even surprise.
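At its core, a personal reader like the one described has to fetch feeds and pull entries out of them. As a rough sketch of that idea (this is not feedi's actual code; the function name and sample feed are invented for illustration), extracting entry titles from an RSS 2.0 document with Python's standard library might look like:

```python
import xml.etree.ElementTree as ET

def parse_rss_titles(rss_xml: str) -> list[str]:
    """Extract entry titles from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    # RSS 2.0 nests <item> elements under <channel>; each has a <title>.
    return [item.findtext("title", default="") for item in root.iter("item")]

# A tiny hand-written sample feed, for illustration only.
SAMPLE_FEED = """\
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title></item>
    <item><title>Second post</title></item>
  </channel>
</rss>"""

print(parse_rss_titles(SAMPLE_FEED))
```

A real reader would fetch each feed URL on a schedule, deduplicate entries, and render them on one page; the parsing step above is the common denominator.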
-
-
Licensing / Legal
-
Kyle E Mitchell ☛ Unlimited Indemnity for Unpaid Developers?
If you read this blog for open software licensing and policy, have a look at James Bottomley’s latest, “Solving the Looming Developer Liability Problem”. James shares his view on why open software licenses have warranty disclaimers, how open source went from amateur to commercial, how the recent lawsuit against Bitcoin developers fits in, and what it all means for new laws like the EU’s Cyber Resilience Act. Then he reminds us of section 9 of the Apache License and proposes that open developers expand on it to require that users indemnify them, or otherwise financially cover them, for new legal liabilities from use of their work.
A few thoughts came to mind as I read James’ post.
-
-
Open Data
-
Rlang ☛ Météo-France Open Data
Great news! Météo-France has started to widen access to its open archive data. No API so far and a lot of files… What can we do?
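With no API, the practical answer is to download the files and process them locally. As a minimal sketch of that workflow (the sample data and column names are invented for illustration; Météo-France's actual files have their own layout), averaging one column of a semicolon-separated file with Python's standard library might look like:

```python
import csv
import io

# Invented sample in the spirit of a station-observations file;
# real archive files have their own columns and separators.
SAMPLE = """station;date;tmin;tmax
07005;2024-01-01;1.2;6.8
07005;2024-01-02;-0.4;5.1
"""

def mean_tmax(csv_text: str) -> float:
    """Average the tmax column of a semicolon-separated CSV."""
    rows = list(csv.DictReader(io.StringIO(csv_text), delimiter=";"))
    return sum(float(row["tmax"]) for row in rows) / len(rows)

print(round(mean_tmax(SAMPLE), 2))
```

The same pattern scales to a directory of downloaded files: loop over them, parse each with `csv.DictReader`, and accumulate the statistics you need.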
-
-
Open Access/Content
-
Scoop News Group ☛ Open access to AI foundational models poses various security and compliance risks, report finds
IST’s report outlines the risks directly associated with models of varying accessibility, including malicious use by bad actors abusing AI capabilities and, in fully open models, compliance failures in which users can change models “beyond the jurisdiction of any enforcement authority.”
While opportunities can arise from a more accessible AI foundational model, the report states that “once models are made fully open, it is impossible for developer organizations to walk back a model’s release.”
-