"Defining AI along political and ideological language allows us to think about things we experience and recognize productively as AI, without needing the self-serving supervision of computer scientists to allow or direct our collective work. We can recognize, based on our own knowledge and experience as people who deal with these systems, what’s part of this overarching project of disempowerment by the way that it renders autonomy farther away from us, by the way that it alienates our authority on the subjects of our own expertise." (I would add, though I'm sure Alkhatib would agree with this, that attributing authority to something that "learns from examples" is itself a political and ideological act!)
"Consider doing this same experiment, but, instead of poetry, it's literally anything STEM, and, when our criminally underpaid participants preferred ChatGPT, we gleefully write a paper titled 'AI-generated physics is indistinguishable from human-written physics.' It's unimaginable that such a paper would pass peer-review in any scientific journal."
"Their results say that 3% of participants' actual lives might've been saved by Replika. That means 3% of participants are tethered to life by a venture funded tech company that can simply disappear at any time, or, more likely, decide to charge more money, because they can. How much are you willing to pay to not die tomorrow?"
"[S]tudents [are] creative young people, so they empathize with robbed creators. They want tools that help them, not hinder them. And a lot of them are (rightly) concerned about the environment, so they’re shocked to learn that ChatGPT takes ten times the amount of energy Google does to answer the same question, usually worse." Also the Jared White quote: "You can literally just not use it"
"AI in education can be characterized by ‘critical hype’—forms of critique that implicitly accept what the hype says AI can do, and inadvertently boost the credibility of those promoting it. The risk of both forms of hype is schools assume a very powerful technology exists that they must urgently address, while remaining unaware of its very real limitations, instabilities and faults or the complex ethical problems associated with data-driven technologies in education."
"There’s a quote by Finnish architect Eliel Saarinen that UX designers like repeating: 'Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.' But none of the speakers at [the conference] chose to examine the larger context of the [generative AI] tools they were encouraging us to use."
"When a decision making apparatus is non-human and your system's POSIWID function is to extract and dehumanize and murder for profit and power, let's call that what it is: humans with power deciding to harm the world, with some extra steps."
too many good quotes from this, among them: "[T]here are no labor shortcuts for caring, in and of itself, no stretching a little bit of intentionality to provide focused attention across some ever increasing population. Care doesn’t scale; cruelty does. You can’t automate your way around the infinite obligation to the other."
a few data points from a report that is hidden behind a form asking you to donate your e-mail address to a sales department; mostly anecdotes otherwise
"The field I know as 'natural language processing' is hard to find these days.... It's rare to see NLP research that doesn't have a dependency on closed data controlled by OpenAI and Google, two companies that I already despise. [...] [C]ollecting a whole lot of text in a lot of languages... used to be a pretty reasonable thing to do, and not the kind of thing someone would be likely to object to. Now, the text-slurping tools are mostly used for training generative AI, and people are quite rightly on the defensive. If someone is collecting all the text from your books, articles, Web site, or public posts, it's very likely because they are creating a plagiarism machine that will claim your words as its own." i feel this in my very bones
"Even if these systems were providing value to disadvantaged people, that shouldn’t make them off limits to criticism. Is it classist to call out the shady business practices of companies like Walmart and Dollar General just because many lower income people depend on the low prices they provide?"
a wonderful, thoughtful, and down-to-earth statement on LLM use in educational contexts. "I’m a super straight-laced Mormon and, like, never ever swear or curse, but in this case, the word [bullshit] has a formal philosophical meaning... so it doesn’t count :)" lmao.
i think what the "eventually generative ai will be indistinguishable from human-made things" folks are failing to understand is that not only do the methods of creation leave traces in the media they create, but you also can't predict beforehand what those traces will be; also, people are really really good at recognizing these traces
"Working at an AI-equipped workplace is like being the parent of a furious toddler who has bought a million Sea Monkey farms off the back page of a comic book"
"These systems exist to facilitate violence, and HCI researchers who have committed their careers to curl back that violence at the margins have considerably more of something in them than I have. I hope it’s patience and determination, and not self-interested greed."
"As a rule, I wouldn't have said that resource burning, hype driving, copyright infringing AI companies tend to be anywhere near the left of the political spectrum. In fact, I'd even go so far as to say that those tend to be quite distinctly right-wing (and libertarian at that) "qualities"."
"[W]hat is currently sold to us as “Artificial Intelligence”... is neither intelligent nor entirely artificial, yet it’s pumping the internet with automated content more quickly than you can fire an editorial office. No system predicated on these assumptions can hope to discern “misinformation” from “information”: both are reduced to equally weighted packets of content, merely seeking an optimization function in a free marketplace of ideas. And both are equally ingested into a great statistical machinery, which weighs only our inability to discern."
"The central claim of the tech companies selling LLMs is that any work that people do that results in text artifacts is just "text in-text out" and can therefore be replaced by their synthetic text-extruding machines. The best response to that claim is not "oh no, we can't keep up" but to take pride in one's work... and push back"
"In other words, some users get the full experience, the one with all the words, all the context, and all the options. But if Nielsen’s AI thinks you have a disability, you’ll get a different experience, a simpler experience that’s more appropriate for people like you. It’s an ugly kind of paternalism with a new AI twist."
"If ensuring quality is your responsibility, and the tool you’re using pushes bad quality your way, you are fighting against gravity in that situation. It’s you versus the forces of entropy."
"... there is little room to doubt that the current implementation of AI Assistants discourages code reuse. Instead of refactoring and working to DRY ('Don't Repeat Yourself') code, these Assistants offer a one-keystroke temptation to repeat existing code."
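the "one-keystroke temptation" is easy to picture with a toy sketch (all names here are made up for illustration): an assistant happily autocompletes the same validation logic into every function, where a refactor would pull it out once.

```python
# Repetitive version an assistant might autocomplete: the same
# validation rule pasted into each function that needs it.
def create_user(name):
    if not name or len(name) > 64:
        raise ValueError("invalid name")
    return {"name": name}

def rename_user(user, name):
    if not name or len(name) > 64:
        raise ValueError("invalid name")
    user["name"] = name
    return user

# DRY version: the shared rule lives in one place, so a future
# change (say, a new length limit) is a one-line edit.
def validate_name(name):
    if not name or len(name) > 64:
        raise ValueError("invalid name")

def create_user_dry(name):
    validate_name(name)
    return {"name": name}

def rename_user_dry(user, name):
    validate_name(name)
    user["name"] = name
    return user
```

both versions behave identically today; the difference only shows up the first time the rule has to change.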
"I would not trust a large language model to... plan an itinerary in a new city, because I’m not a boring or unimaginative person who lets a cheap piece of plastic tell me to do the ten most common results for 'stuff to do in London.' [...] This latest push for AI is making the world lazier, less curious, harder to navigate, ripping people off, and creating a topic somehow more tiring than that year these people wouldn’t shut the fuck up about NFTs and then never brought it up ever again when the market imploded."
"Even after the lessons, students seemed to feel more confident with a traditional approach than with AI. Most felt low-to-moderate confidence about achieving their writing goals with AI, and even less confidence about how to use AI ethically. We hope with future research to figure out whether this insecurity is due to inexperience or endemic to AI tool use."
"There are a hundred and one reasons to worry about Elsevier mining our scholarship to maximize its profits. I want to linger on what is, arguably, the most important: the potential effects on knowledge itself. At the core of these tools—including a predictable avalanche of as-yet-unannounced products—is a series of verbs: to surface, to rank, to summarize, and to recommend. The object of each verb is us—our scholarship and our behavior. What’s at stake is the kind of knowledge that the models surface, and whose knowledge."
the claim here is that AI systems are 'different,' but it just looks like regular ol' 'building your software on other people's APIs' to me. except the APIs are garbage and they don't work
"We need to ensure that human creators are compensated, not just for the sake of the creators, but so our books and arts continue to reflect both our real and imagined experiences, open our minds, teach us new ways of thinking, and move us forward as a society, rather than rehash old ideas."
"The chatbot’s answers sound extremely specific to the current context but are in fact statistically generic. The mathematical model behind the chatbot delivers a statistically plausible response to the question. The marks that find this convincing get pulled in." this is really good but i wish it approached the topic of psychics with a bit less bro-ey skepticism
"[E]ducational technology is overly dominated by psychological conceptions of individual learning... AI-based personalized learning systems [are] based on notions of mastery and... statistical measurement," reflecting an "assumption that human intelligence is an individual capacity, which can therefore be improved with technical solutions — like tutorbots — rather than something shaped by educational policies and institutions."
"These AI jobs are [the] bizarro twin [of 'bullshit jobs']: work that people want to automate, and often think is already automated, yet still requires a human stand-in." [...] "When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious."
"AI is not a way of representing the world but an intervention that helps to produce the world that it claims to represent. Setting it up one way or another changes what becomes naturalised and what becomes problematised. Who gets to set up the AI becomes a crucial question of power."
"It matters that the first staticky voices we’ve dialed in with our massive, multi-billion-parameter arrays are dreamers, confabulators, and improvisers. It matters that Chess and Go, the sites where we first encountered their older, more serious siblings, are artworks. Artworks carved out of instrumental reason. Artworks that, long before computers existed, were spinning beautiful webs of logic and attention. Art is not a precious treasure in need of protection. Art is a fearsome wellspring of human power from which we will draw the weapons we need to storm the gates of the reality studio and secure the future."
from Joanne McNeil: "... artificial intelligence with personality could become an early twenty-teens kitschy retro artifact. [...]
The Tesla Bot announcement seemed to pre-figure this. The person in robot costume danced to music that sounded reminiscent of Daft Punk’s Tron: Legacy soundtrack, which was released more than ten years ago. This isn’t the future. It’s an aesthetic that’s played out."
"To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on 'ethical AI' is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies. [...] It is strange that Ito, with no formal training, became positioned as an 'expert' on AI ethics, a field that barely existed before 2017. But it is even stranger that two years later, respected scholars in established disciplines have to demonstrate their relevance to a field conjured by a corporate lobby."
"The Transformer is nothing more than an architecture where the core functional unit is attention. You stack attention layers on top of attention layers, just like you would do with CNN or RNN layers."
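the "stack attention layers on top of attention layers" idea can be sketched in a few lines of NumPy; this is a deliberately stripped-down toy (single-head attention with residual connections only, omitting the feed-forward sublayers, layer normalization, and multi-head splitting of a real Transformer), with all names invented for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model). Project tokens to queries, keys, values,
    # then mix values by scaled dot-product similarity.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

def attention_stack(x, layers):
    # "attention layers on top of attention layers": each layer is a
    # (w_q, w_k, w_v) triple, applied with a residual connection.
    for w_q, w_k, w_v in layers:
        x = x + self_attention(x, w_q, w_k, w_v)
    return x

rng = np.random.default_rng(0)
d = 8
layers = [tuple(rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
          for _ in range(4)]              # 4 stacked attention layers
x = rng.normal(size=(5, d))               # a 5-token sequence of 8-dim embeddings
y = attention_stack(x, layers)
print(y.shape)  # (5, 8): same shape in, same shape out, so layers stack freely
```

the shape-preserving property is the point of the quote: because each layer maps (seq_len, d_model) to (seq_len, d_model), you can stack as many as you like, just as with CNN or RNN layers.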
"As machine learning algorithms are commoditized, those who can work along the entirety of the applied machine learning arc will be the most valuable."