"In other words, some users get the full experience, the one with all the words, all the context, and all the options. But if Nielsen’s AI thinks you have a disability, you’ll get a different experience, a simpler experience that’s more appropriate for people like you. It’s an ugly kind of paternalism with a new AI twist."
"Open source, open data, open training code, fully reproducible and auditable text embedding model"
"If ensuring quality is your responsibility, and the tool you’re using pushes bad quality your way, you are fighting against gravity in that situation. It’s you versus the forces of entropy."
"... there is little room to doubt that the current implementation of AI Assistants discourages code reuse. Instead of refactoring and working to DRY ('Don't Repeat Yourself') code, these Assistants offer a one-keystroke temptation to repeat existing code."
"I would not trust a large language model to... plan an itinerary in a new city, because I’m not a boring or unimaginative person who lets a cheap piece of plastic tell me to do the ten most common results for 'stuff to do in London.' [...] This latest push for AI is making the world lazier, less curious, harder to navigate, ripping people off, and creating a topic somehow more tiring than that year these people wouldn’t shut the fuck up about NFTs and then never brought it up ever again when the market imploded."
"Even after the lessons, students seemed to feel more confident with a traditional approach than with AI. Most felt low-to-moderate confidence about achieving their writing goals with AI, and even less confidence about how to use AI ethically. We hope with future research to figure out whether this insecurity is due to inexperience or endemic to AI tool use."
"There are a hundred and one reasons to worry about Elsevier mining our scholarship to maximize its profits. I want to linger on what is, arguably, the most important: the potential effects on knowledge itself. At the core of these tools—including a predictable avalanche of as-yet-unannounced products—is a series of verbs: to surface, to rank, to summarize, and to recommend. The object of each verb is us—our scholarship and our behavior. What’s at stake is the kind of knowledge that the models surface, and whose knowledge."
the claim here is that AI systems are 'different,' but it just looks like regular ol' 'building your software on other people's APIs' to me. except the APIs are garbage and they don't work
"We need to ensure that human creators are compensated, not just for the sake of the creators, but so our books and arts continue to reflect both our real and imagined experiences, open our minds, teach us new ways of thinking, and move us forward as a society, rather than rehash old ideas."
The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con
"The chatbot’s answers sound extremely specific to the current context but are in fact statistically generic. The mathematical model behind the chatbot delivers a statistically plausible response to the question. The marks that find this convincing get pulled in." this is really good but i wish it approached the topic of psychics with a bit less bro-ey skepticism
"[E]ducational technology is overly dominated by psychological conceptions of individual learning... AI-based personalized learning systems [are] based on notions of mastery and... statistical measurement," reflecting an "assumption that human intelligence is an individual capacity, which can therefore be improved with technical solutions — like tutorbots — rather than something shaped by educational policies and institutions."
"These AI jobs are [the] bizarro twin [of 'bullshit jobs']: work that people want to automate, and often think is already automated, yet still requires a human stand-in." [...] "When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious."
"AI is not a way of representing the world but an intervention that helps to produce the world that it claims to represent. Setting it up one way or another changes what becomes naturalised and what becomes problematised. Who gets to set up the AI becomes a crucial question of power."
after twenty years in this business i thought i'd seen some horseshit. but not until today did i truly see horseshit
"readings which popped up in the Twitch chat" during Stochastic Parrots day
"It matters that the first staticky voices we’ve dialed in with our massive, multi-billion-parameter arrays are dreamers, confabulators, and improvisers. It matters that Chess and Go, the sites where we first encountered their older, more serious siblings, are artworks. Artworks carved out of instrumental reason. Artworks that, long before computers existed, were spinning beautiful webs of logic and attention. Art is not a precious treasure in need of protection. Art is a fearsome wellspring of human power from which we will draw the weapons we need to storm the gates of the reality studio and secure the future."
on the symbols/network schism
from Joanne McNeil: "... artificial intelligence with personality could become an early twenty-teens kitschy retro artifact. [...] The Tesla Bot announcement seemed to pre-figure this. The person in robot costume danced to music that sounded reminiscent of Daft Punk’s Tron: Legacy soundtrack, which was released more than ten years ago. This isn’t the future. It’s an aesthetic that’s played out."
Martin's got the right idea
"To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on 'ethical AI' is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies. [...] It is strange that Ito, with no formal training, became positioned as an 'expert' on AI ethics, a field that barely existed before 2017. But it is even stranger that two years later, respected scholars in established disciplines have to demonstrate their relevance to a field conjured by a corporate lobby."
louder for the people in the back. 'There is no such thing as “an artificial intelligence”.'
incl interview with Gene Kogan, many assignable small pieces
"The Transformer is nothing more than an architecture where the core functional unit is attention. You stack attention layers on top of attention layers, just like you would do with CNN or RNN layers."
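the "stack attention layers on top of attention layers" idea can be sketched in a few lines of NumPy — a toy illustration only, not the quoted author's code: a single-head scaled dot-product self-attention function, applied repeatedly with residual connections (the weight shapes, layer count, and residual wiring here are my assumptions for the sketch; a real Transformer adds multi-head projection, layer norm, and feed-forward sublayers):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # scaled dot-product self-attention over a sequence x of shape (seq, d)
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq, seq) attention weights
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d, seq, n_layers = 8, 5, 3
x = rng.standard_normal((seq, d))

# "attention layers on top of attention layers": each layer's output
# feeds the next, with a residual connection as in the Transformer
for _ in range(n_layers):
    wq, wk, wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
    x = x + self_attention(x, wq, wk, wv)

print(x.shape)  # → (5, 8)
```

the point of the quote survives the toy version: swap the `self_attention` call for a convolution or a recurrent cell and the stacking loop is unchanged — attention is just the functional unit being repeated.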
"As machine learning algorithms are commoditized, those who can work along the entirety of the applied machine learning arc will be the most valuable."