Open source and libre/free software are particularly vulnerable to a future where AI-generated code is ruled to be either infringing or public domain.
In the former case, disentangling AI edits from human edits could tie a project up in legal proceedings for years, and most projects don't have the funding to fight a copyright suit. Specifically, code that is AI-generated and subsequently modified or incorporated into the rest of the codebase would raise the question of whether the subsequent human edits were non-fair-use derivative works.
In the latter case, the license restrictions would no longer apply to portions of the codebase, raising similar questions about derived code. A project that is only 98% OSS/FS licensed suddenly has much less leverage in takedowns against companies abusing the license terms, since it would have to prove that infringers are definitely using the human-written, licensed code.
Proprietary software is only mildly harmed in either case; it would require speculative copyright owners to disassemble their binaries and try to make the case that AI-generated code infringed without being able to see the codebase itself. And plenty of proprietary software has public domain code in it already.
People sometimes miss that copyleft is powered by copyright. Copyleft (which means Linux, Blender, and plenty of other goodness) needs the ability to impose some rules on what users do with your work, presumably in the interest of common good. Such ability implies IP ownership.
This does not mean that powerful interests abusing copyright with ever increasing terms and enforcement overreach is fair game. It harms common interest.
However, it does mean that abusing copyright from the other side and denouncing the core ideas of IP ownership—which is now sort of in the interest of certain companies (and capital heavily invested in certain fashionable but not yet profitable startups) based around IP expropriation—harms common interest just as well.
While this is a generally true statement (and has echoes in other areas like sovereign citizens), GenAI may make copyright (and copyleft) economically redundant.
While the AI we have now is not good enough to make an entire operating system when asked*, if/when they can, the benefits of all the current licensing models evaporate, and it doesn't matter if that model is proprietary with no source, or GPL, or MIT, because by that point anyone else can reproduce your OS for whatever the cost of tokens is without ever touching your code.
But as we're not there yet, I agree with @benlivengood that (most**) OSS projects must treat GenAI code as if it's unusable.
* At least, not a modern OS. I've not tried getting any model to output a tiny OS that would fit in a C64, and while I doubt they can currently do this, it is a bet I might lose, whereas I am confident all models would currently fail at e.g. reproducing Windows XP.
** I think MIT licensed projects can probably use GenAI code, they're not trying to require derivatives to follow the same licence, but I'm not a lawyer and this is just my barely informed opinion from reading the licenses.
I have a few sociophilosophical quibbles about the impact of this, but to focus on a practical part:
> by that point anyone else can reproduce your OS for whatever the cost of tokens is without ever touching your code.
Do you think that the cost of tokens will remain low enough once these companies for now operating at loss have to be profitable, and it really is going to be “anyone else”? Or, would it be limited to “big tech” or select few corporations who can pay a non-trivial amount of money to them?
Do you think it would mean they essentially sell GPL’ed code for proprietary use? Would it not affect FOSS, which has been till now partially powered by the promise to contributors that their (often voluntary) work would remain for public benefit?
Do you think someone would create and make public (and gather so much contributor effort for) something on the scale of Linux, if they knew it would be open to being scraped by an intermediary who can sell it at whatever price they choose to companies that are then free to call it their own and repackage it commercially without contributing back, providing their source, or crediting the original authors in any way?
I understand why experienced developers don't want random AI contributions from no-knowledge "developers" landing in a project. In any situation, if a human has to review AI code line by line, that would tie up humans for years, even ignoring the legal questions.
#1 There will be no verifiable way to prove something was AI generated beyond early models.
#2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects. The only room for debate on that is an apocalypse level scenario where humans fail to continue producing semiconductors or electricity.
#3 If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust. If the license permits forking then it could be forked too, but cloning and purging any potential legal issues might be preferred.
There still is a path for open source projects. It will be different. There's going to be much, much more software in the future and it's not going to be all junk (although 99% might.)
It's happening slowly all around. It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated. But there are also local tools generated faster than you could adjust existing tools to do what you want. I'm running 3 things now just for myself that I generated from scratch instead of trying to send feature requests to existing apps I can buy.
It's only going to get more pervasive from now on.
> It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated
I feel like we'd be hearing from businesses that crushed their competition by delivering faster or with fewer people. Where are those businesses?
> But there are also local tools generated
This is really not the same thing as the original claim ("Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects").
> I feel like we'd be hearing from business that crushed their competition by delivering faster or with fewer people. Where are those businesses?
As if the tech part were the major part of getting the product to market.
Those businesses are probably everywhere. They just aren't open about admitting they're using AI to speed up their marketing/product design/programming/project management/graphics design, because a) it's not normal outside some tech startup sphere to brag about how you're improving your internal process, and b) because almost everyone else is doing that too, so it partially cancels out - that is what competition on the market means, and c) admitting to use of AI in current climate is kind of a questionable PR move.
As for those who fail to leverage the new tools and are destined to be outcompeted: that process takes time, because companies have inertia.
>> But there are also local tools generated
> This is really not the same thing as the original claim
Point is that such wins compound. You get yak shaving done faster by fashioning your own tools on the fly, and it also cuts cost and a huge burden of maintaining relationships with third parties[0]
--
[0] - Because each account you create, each subscription you take, even each online tool you kinda track and hope hope hope won't disappear on you - each such case comes with a cognitive tax of a business relationship you probably didn't want, that often costs you money directly, and that you need to keep track of.
And because from the outside everything looks worse than ever. Worse quality, no more support, established companies going crazy to cut costs. AI slop is replacing thoughtful content across the web. Engineering morale is probably at an all time low for my 20 years watching this industry...
So my question is: if so many people should be bragging to me and celebrating how much better things are, why does it look to me like they are worse and everyone is miserable about it...?
Schrödinger's AI. It's everywhere, but you can't point to it because it's apparently indistinguishable from human work, except for the shitty AI, which is just shitty AI.
If it was self-evident then I wouldn’t need to ask for evidence. And I imagine you wouldn’t need to be waving your hands making excuses for the lack of evidence.
This is happening right now and it won’t be obvious until the liquidity events provide enough cover for victory lap story telling.
The very knowledge that an organization is experiencing hyper acceleration due to its successful adoption of AI across the enterprise is proprietary.
There are no HBS case studies about businesses that successfully established and implemented strategic pillars for AI because the pillars were likely written in the past four months.
For some reason these fully functional AI-generated projects that the authors vibe out while playing guitar and clipping their toenails are never open source.
Going by the standard of "But there are also local tools generated faster than you could adjust existing tools to do what you want", here's a random one of mine that's in regular use by my wife:
Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro (I forgot to note that down in this project), and recently modified with Claude Code because I had to test it on something.
Getting the first version of this up was literally both faster and easier than finding a QR code generator that I'm sure is not bloated, not bullshit, not loaded with trackers, that's not using shorteners or its own URL (it's always a stupid idea to use URL shorteners you don't control), not showing ads, mining bitcoin and shit, one that my wife can use in her workflow without being distracted too much. Static page, domain I own, a bit of fiddling with LLMs.
What I can't link to is half a dozen single-use tools or faux tools created on the fly as part of working on something. But this happens to me a couple of times a month.
To anchor another vertex in this parameter space: I found it easier and faster to ask an LLM to build me a "breathing timer" (one that counts down N seconds and resets, repeatedly) with an analog indicator, because a search query to Google/Kagi would be of comparable length, and then I'd have to click on results!
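For scale, the whole thing is roughly at the level of this minimal sketch (here a terminal version with a crude text bar standing in for the analog dial the LLM actually produced):

```python
import time

def breathing_timer(seconds: int = 8, cycles: int = 10) -> None:
    """Count down `seconds`, show a crude text "dial", then reset - repeatedly."""
    width = 20  # characters in the progress bar
    for _ in range(cycles):
        for remaining in range(seconds, 0, -1):
            filled = int(width * remaining / seconds)
            bar = "#" * filled + "-" * (width - filled)
            print(f"\r[{bar}] {remaining:2d}s", end="", flush=True)
            time.sleep(1)
        print("\r[" + "-" * width + "]  reset", flush=True)

if __name__ == "__main__":
    breathing_timer()
```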
It overlays a trivial UI to set up looping over a segment of any YouTube video, and automatically persists the setting by video ID. It solves the trivial annoyance of channel jingles and other bullshit at start/end of videos that I use repeatedly as background music.
This was mostly done zero-shot by Claude, with maybe two or three requests for corrections/extra features, total development time maybe 15 minutes. I use it every day all the time ever since.
You could say, "but SponsorBlock" or whatever, but per what GP wrote, I just needed a small fraction of functionality of the tools I know exist, and it was trivial to generate that with AI.
I am reminded of a meme about musicians. Not well enough to find it, but it was something like this:
Real musicians don’t mix loops they bought.
Real musicians make their own synth patches.
Real musicians build their own instruments.
Real musicians hand-forge every metal component in their instruments.
…
They say real musicians raise goats for the leather for the drum-skins, but I wouldn't know because I haven’t made any music in months and the goats smell funny.
There's two points here:
1) even though most people on here know what npm is, many of us are not web developers and don't really know how to turn a random package into a useful webapp.
2) The AI is faster than googling a finished product that already exists, not just as an NPM package, but as a complete website.
Especially because search results require you to click through all the popups everyone stuffs everywhere (cookie banners, ads) before you even find out whether the site you landed on was actually a scam that doesn't do the right thing (or perhaps *anything*) anyway.
> I am reminded of a meme about musicians. Not well enough to find it
You only need to search for “loops goat skin”. You’re butchering the quote and its meaning quite a bit. The widely circulated version is:
> I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.
It’s not about “real musicians”¹ but a personal reflection on dependencies and abstractions and the nature of creative work and remixing. Your interpretation of it is backwards.
> To be clear: this isn't an endorsement of using models for serious Open Source libraries. This was an experiment to see how far I could get with minimal manual effort, and to unstick myself from an annoying blocker. The result is good enough for my immediate use case and I also felt good enough to publish it to PyPI in case someone else has the same problem.
By their own admission, this is just kind of OK. They don’t even know how good or bad it is, just that it kind of solved an immediate problem. That’s not how you create sustainable and reliable software. Which is OK, sometimes you just need to crap something out to do a quick job, but that doesn’t really feel like what your parent comment is talking about.
> the authors vibe out while playing guitar and clipping their toenails
I don't think anyone is claiming that. If you submit changes to a FOSS project and an LLM assisted you in writing them how would anyone know? Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.
The (admittedly still controversial) claim being made is that developers with LLM assistance are more productive than those without. Further, that there is little incentive for such developers to advertise this assistance. Less trouble for all involved to represent it as 100% your own unassisted work.
> Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.
That is a big assumption. If everyone were doing that, this wouldn’t be a major issue. But as the curl developer has noted, people are using LLMs without thinking and wasting everyone’s time and resources.
I can attest to that. Just the other day I got a bug report, clearly written with the assistance of an LLM, for software which has been stable and used in several places for years. This person, when faced with an error on their first try, instead of pondering “what am I doing wrong” instead opened a bug report with a “fix”. Of course, they were using the software wrong. They did not follow the very short and simple instructions and essentially invented steps (probably suggested by an LLM) that caused the problem.
Waste of time for everyone involved, and one more notch on the road to causing burnout. Some of the worst kind of users are those who think “bug” means “anything which doesn’t immediately behave the way I thought it would”. LLMs empower them, to the detriment of everyone else.
Why would you need to carefully review code? That is so 2024. You’re bottlenecking the process and are at a disadvantage when the AI could be working 24/7. We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.
AI “assistance” is a short intermediate phase, like the “centaurs” that Garry Kasparov was very fond of (human + computer beat both a human and a computer by itself… until the computer-only became better).
> We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.
Was your comment tongue-in-cheek? If not, where is this huge mass of AI-generated software?
All around you; it's just that it doesn't make sense for developers to reveal that a lot of their work is now about chunking and refining the specifications written by the product owner.
Admitting as much is like admitting you are overpaid for your job, and that a 20 USD AI agent can do 75% of the work better and faster than you.
Is it easy to admit that skills you have spent 10+ years learning are already progressively being replaced by a machine? (Like thousands of jobs in the past.)
More and more, being a developer is going to be a monkey job where your only task is to make sure there is enough coal in the steam engine.
Compilers destroyed the jobs of developers writing assembler code; they had to adapt. They insisted that hand-written assembler was better.
This is the same, except you write code in natural language. It may not be optimal in all situations, but it often gets the job done.
And not that long ago, the majority of the population believed the Earth was flat and that cigarettes were good for your health. Radioactive toys were being sold to children.
P = NP is less "crush their little hearts", more "may cause widespread heart attacks across every industry due to cryptography failing, depending on if the polynomial exponent is small enough".
I vibed that workflow just so more people could have access to this tool. It was a pain and it actually took time away from toenail clipping.
And while I didn't lay hands on a guitar much during this period, I did manage to build this while bouncing between playing Civil War tunes on a 3D-printed violin and generating music in Suno for a soundtrack to “Back on That Crust,” the missing and one true spiritual successor to ToeJam & Earl: https://suno.com/song/e5b6dc04-ffab-4310-b9ef-815bdf742ecb
This app is concatenating files with an extra line of metadata added?
You know this could be done in a few lines of shell script? You can then make it a finder action extension so it’s part of the system file manager app.
Only the simplest one is open (and before you discount it as too trivial, somehow none of the other ones did what I wanted) https://github.com/viraptor/pomodoro
The others are just too specific for me to be useful for anyone else: an android app for automatic processing of some text messages and a work scheduling/prioritising thing. The time to make them generic enough to share would be much longer than creating my specific version in the first place.
> and before you discount it as too trivial, somehow none of the other ones did what I wanted
No offense, it's really great that you are able to make apps that do exactly what you want, but your examples are not very good to show that "software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects" (as someone else suggested above). Complex real world software is different from pomodoro timers and TODO lists.
Cut it out with the patronising; I work with complex software, which is why I specifically mentioned that the only example I published was simple.
> but your examples are not very good to show that "software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects"
Here's the thing though - it's already the case, because I wouldn't have created those tools by hand otherwise. I just don't have the time, and they're too personal/edge-case to pay anyone to make them. So the comparison in this case is between 100% human-developed non-existent software and an AI-generated project that exists. The latter wins in every category by default.
I don't think they're being patronizing, it's that "simple personal app that was barely worth making" is nice to have but not at all what they want evidence of.
It doesn't matter that my thing doesn't generalise if someone can build their own customised solution quickly. But also, if I wanted to sell it or distribute it, I'd ensure it was more generic from the beginning.
If you comment about AI generated code in a thread about qemu (mission-critical project that many industries rely upon), a pomodoro app is not going to do the trick.
And no, it doesn't "show that is possible". qemu is not only more complex, it's a whole different problem space.
I'm getting towards the end of a vibe-coded ZFS storage backend for ganeti that can live-migrate VMs to another host by: taking a snapshot and replicating it to the target, pausing the VM, taking an incremental snapshot and replicating that, and then unpausing the VM on the destination machine. https://github.com/linsomniac/ganeti/tree/newzfs
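The core of that flow is conceptually just a few zfs commands around a brief pause. A rough sketch - the `migrate` helper and the pause/resume hooks are hypothetical stand-ins for what ganeti actually provides; the `zfs send`/`recv` invocations are the real commands:

```python
import subprocess

def run(cmd: str) -> None:
    """Run a shell command, raising if it fails."""
    subprocess.run(cmd, shell=True, check=True)

def migrate(volume: str, target_host: str, pause_vm, resume_vm) -> None:
    """Sketch of ZFS-based live migration: bulk copy, brief pause, incremental copy.

    `pause_vm` / `resume_vm` are callables supplied by the orchestration layer
    (ganeti in the real project); they are placeholders here.
    """
    # 1. Snapshot and replicate the bulk of the data while the VM keeps running.
    run(f"zfs snapshot {volume}@migrate-base")
    run(f"zfs send {volume}@migrate-base | ssh {target_host} zfs recv -F {volume}")

    # 2. Pause the VM so no further writes land on the source.
    pause_vm()

    # 3. Take an incremental snapshot and send only the delta.
    run(f"zfs snapshot {volume}@migrate-final")
    run(f"zfs send -i @migrate-base {volume}@migrate-final"
        f" | ssh {target_host} zfs recv {volume}")

    # 4. Resume the VM on the destination host.
    resume_vm()
```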
Other LLM tools I've built this week:
This afternoon I built a web-based SQL query editor/runner with results display, for dev/ops people to run read-only queries against our production database. To replace an existing super simple one, and add query syntax highlighting, snippet library, and other modern features. I can probably release this though I'd need to verify that it won't leak anything. Targets SQL Server.
A couple CLI Jira tools to pull a list of tickets I'm working on (with cache so I can get an immediate response, then get updates after Jira response comes back), and tickets with tags that indicate I have to handle them specially.
An icinga CLI that downtimes hosts, for when we do sweeping machine maintenances like rebooting a VM host with dozens of monitored children.
An Ansible module that is a "swiss army knife" for filesystem manipulation, merging the functions of copy, template, and file, so you can loop over a list and: create a directory, template a couple of files into it (doing a notify on one and a when on another), and ensure a file exists if it doesn't already - all to reduce duplication of boilerplate when doing a bunch of file deploys. This I will release as an Ansible Galaxy module once I have it tested a little more.
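For flavour, the Python skeleton of a module like that is pretty small. A rough sketch, with illustrative parameter names rather than the real module's interface:

```python
#!/usr/bin/python
# Rough sketch of a "swiss army knife" filesystem module: one module that can
# create directories, write files, or just ensure a file exists.
# Parameter names here are illustrative, not the real module's interface.
from ansible.module_utils.basic import AnsibleModule
import os


def main():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type="path", required=True),
            state=dict(type="str", default="file",
                       choices=["directory", "file", "touch"]),
            content=dict(type="str", default=""),
        ),
        supports_check_mode=True,
    )
    path = module.params["path"]
    state = module.params["state"]
    changed = False

    if state == "directory" and not os.path.isdir(path):
        if not module.check_mode:
            os.makedirs(path)
        changed = True
    elif state in ("file", "touch") and not os.path.exists(path):
        if not module.check_mode:
            with open(path, "w") as f:
                f.write(module.params["content"])
        changed = True

    module.exit_json(changed=changed, path=path)


if __name__ == "__main__":
    main()
```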
None of this seems relevant to the original claim: "Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects"
I don't feel like it's meaningful to discuss the "competitiveness" of a handful of bespoke local or internal tools.
I vibe-coded my own MySQL-compatible database that performs better than MariaDB, after my agent optimized it for 12 hours. It is also a time-traveling DB and performs better on all benchmarks and the AI says it is completely byzantine-fault-tolerant. Programmers, you had a nice run. /s
Not sure about the parent, but you could argue JetBrains' fancy autocomplete is AI and generates a substantial portion of code. It runs using a local model and, in my experience, does a pretty good job of guessing the rest of the line with minimal input (so you could argue 80% of each line was AI-generated).
That's like driving big personal vehicles, having a bunch of children, eating a bunch of meat, and doing nothing about it because marine and terrestrial ecosystems haven't been fully destroyed by global warming yet.
Ahh, there you go, environmental activists outright saying that having children is a crime against nature. Wonderful, you seem to hit a rather bad stereotype right on the head. What's next? Earth would be better off if humanity were eradicated?
I feel like this is mostly assertion without proof. I'm aware that what you hint at is happening, but the conclusions you arrive at are far from proven, or even reasonable, at this stage.
For what it's worth, I think AI for code will arrive at a place like how other coding tools sit – hinting, intellisense, linting, maybe even static or dynamic analysis, but I doubt NOT using AI will be a critical asset to productivity.
Someone else in the thread already mentioned it's a bit of an amplifier. If you're good, it can make you better, but if you're bad it just spreads your poor skills like a robot vacuum spreads animal waste.
I think that was his point, the project full of bad developers isn't the competition. It is a peer whose skill matches yours and uses agents on top of that. By myself I am no match for myself + cline.
That’s true in the short term. Longer term it’s questionable, as using AI tools heavily means you don’t remember all the details, creating a new form of technical debt.
yes, constantly. I also don't remember much contextual domain info of a given section of code about 2 weeks into delving into some other part of the same app.
So-called AI makes this worse.
Let me remind you of gyms, now that humans have been saved from much manual activity...
I think that needs actual testing. At what time distances is there an effect, and how big is it? Even if there is an effect, it could be small enough that a mild productivity boost from AI is more important.
The AI tooling is also really, really good at piecing together the code, the domain context, the documentation, the tests, and the related issues/tickets - it could even take the change history into account - to help refresh your memory of unfamiliar code in the context of bugs or new changes you are looking at making.
Whether or not you go to the gym, you are probably going to want to use an excavator if you are going to dig a basement.
I am of two minds about it, having now seen both good coders augmented by AI and bad coders further diminished by it (I would even argue it's worse than Stack Overflow, because back then they would at least have had to adjust the code a little bit).
I am personally somewhere in the middle, just good enough to know I am really bad at this, so I make sure that I don't contribute to anything that is actually important (like QEMU).
But how many people recognize their own strengths and weaknesses? That is part of the problem, and now we are proposing that even that modicum of self-regulation (as flawed as it is) be removed.
FWIW, I hear you. I also don't have an answer. Just thinking out loud.
Regarding #1, at least in the mainframe/cloud model of hosted LLMs, the operators have a history of model prompts and outputs.
For example, if using Copilot, Microsoft also has every commit ever made if the project is on GitHub.
They could, theoretically, determine what did or didn't come out of their models and was integrated into source trees.
Regarding #2 and #3, with relatively novel software like QEMU that models platforms that other open source software doesn't, LLMs might not be a good fit for contributions. Especially where emulation and hardware accuracy, timing, quirks, errata etc matter.
For example, modeling a new architecture or emulating new hardware might have LLMs generating convincing looking nonsense. Similarly, integrating them with newly added and changing APIs like in kvm might be a poor choice for LLM use.
It seems to me that the point in your first paragraph argues against your points #2 and #3.
If a project allows AI generated contributions, there's a risk that they'll be flooded with low quality contributions that consume human time and resources to review, thus paralyzing the project - it'd be like if you tried to read and reply to every spam email you receive.
So the argument goes that #2 and #3 will not materialize, blanket acceptance of AI contributions will not help projects become more competitive, it will actually slow them down.
Personally I happen to believe that reality will converge somewhere in the middle, you can have a policy which says among other things "be measured in your usage of AI," you can put the emphasis on having contributors do other things like pass unit tests, and if someone gets spammy you can ban them. So I don't think AI is going to paralyze projects but I also think its role in effective software development is a bit narrower than a lot of people currently believe...
I am guessing they don't need people to prove that contributions didn't contain AI code, they just need the contributor to say they didn't use any AI code. That way, if any AI code is found in their contribution the liability lies with the contributor (but IANAL).
Are you familiar with the futures market? It’s all about what you call fantasy! Similarly, if you are determining the strategy of your organization, all you have to help you is “fantasy”. By the time evidence exists in sufficient quantity, your lunch has already been eaten long ago. A good CEO is one that can see where the market is going before anyone else. You may be right that AI is just a fad, but given how much the big companies and all the major startups of the last few years are investing in it, it’s an overwhelmingly fringe position to hold at this point.
Both the futures market and resource planning are based on evidential standards (usually). When you make those decisions without any reasoning, you are gambling, and might as well go to the casino.
But notably, FOSS development is neither a corporation nor stock trading. It is focused on longevity and maintainability.
> Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects.
There is zero evidence so far that AI improves software developer efficiency.
No, just because you had fun vibing with a chatbot doesn't mean you delivered the end product faster. All of the supposed AI software development gains are entirely self-reported based on "vibes". (Remember these are the same people who claimed massive developer efficiency gains from programming in Haskell or Lisp a few years back.)
Note I'm not even touching on the tech debt issue here, but it is also important.
P.S. The hallucination and counting to five problems will never go away. They are intrinsic to the LLM approach.
That’s not what the policy says, however. You could be the world’s most honest person, using Claude only to generate code you described to it in detail and fully understand, and would still be forbidden.
If AI can generate software so easily and which performs the expected functions, why do we even need to know that it did so? Isn't the future really just asking an AI for a result and getting that result? The AI would be writing all sorts of bespoke code to do the thing we ask, and then discard it immediately after. That is what seems more likely, and not 'so much software we have to figure out rights to'.
> If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust.
Yeah I don’t think so. But if it does then who cares? AI can just make a better QEMU at that point I guess.
They aren’t hurting anyone with this stance (except the AI hype lords), which I’m pretty sure isn’t actually an anti-AI stance, but a pragmatic response to AI slop in its current state.
Is there any likelihood that the output of the model would be public domain? Even if the model itself is public domain, the prompt was created by a human and impacted the output, so I don't see how the output could be public domain. And then after that, the output was hopefully reviewed by the original prompting human and likely reviewed by another human during code review, leading to more human impact on the final code.
Proprietary source code would not usually end up training LLMs. Unless it's leaked, how would an LLM have access to it?
> it would require speculative copyright owners to disassemble their binaries
I wonder whether AI might be a useful tool for making that easier.
If you have evidence then you can get courts to order disclosure or examination of code.
> And plenty of proprietary software has public domain code in it already.
I am pretty sure there is a significant amount of proprietary code that has FOSS code in it, against license terms (especially GPL and similar).
A lot of proprietary code is now being written using AIs trained on FOSS code, and companies are open about this. It might open an interesting can of worms.
Given the number of people on HN who say they're using e.g. Cursor, OpenAI, etc. through work, and my experience with workplaces saying 'absolutely you can't use it', I suspect a large amount is being leaked.
For someone using MIT-licensed code for training, it still requires a copy of the license and the copyright notice in "copies or substantial portions of the software". So I guess it's fine for a snippet, but if the AI reproduces too much of it, then it's in breach.
From the point of view of someone who does not want their code used by an LLM then using GPL code is more likely to be a breach.
That's a brand new ongoing lawsuit. The ship hasn't sailed in either direction yet. It hasn't even been clearly established if Midjourney has liability let alone where the bounds for such liability might lie.
Remember, anyone can attempt to sue anyone for anything at any time in a functional system. How far the suit makes it is a different matter.
On the contrary. IANAL, but this is my understanding of the law (setting aside the "work for hire" thing for simplicity)
1. If you come up with something completely new, you are the sole copyright holder.
2. If you take someone else's copyrighted work and transform it, then both of you have a copyright on the derivative work.
So if you write a brand new comic book that includes Darth Vader, you can't sell that without Disney's permission [1]: they have a copyright on Darth Vader, and so your comic book is partly copyrighted by them. But at the same time, they can't sell it without your permission, because you have a copyright on the comic book too.
In the case of Midjourney outputs, my understanding of the current state of the law is this:
1. Only humans can create copyrights
2. So if Midjourney creates an entirely new image that's not derivative of anyone else's work (as defined by long-established copyright law on derivative works), then nobody owns the copyright, and it's in the public domain
3. If Midjourney creates an image that is derived from someone else's work (as defined by long established copyright law on derivative works), then only Disney has a copyright on that derivative work.
And so, in theory, Disney could distribute Darth Vader images you made with Midjourney, unless you can convince the court that you had enough creative influence over them to warrant a copyright.
[1] Yes of course fair use, trying to make a point here
Here are cases where the products of AI/ML were held not to be the products of people and not capable of being copyrighted. These are about the OUTPUT being unable to be copyrighted.
If a software is truly wide open source in the sense of “do whatever the fuck you want with this code, we don’t care”, then it has nothing to fear from AI.
Open source is about sharing the source code. You generally need to force companies to share their source code derived from your project, or else companies will simply take it, modify it, never release their changes, and charge for it too.
Sharing is caring, being forced to share does not foster care.
Companies don't care, so if you release something as open source that's relevant to them, "companies will simply take it, modify it, and never release their changes,and charge for it too" - but that is what companies do, that is their very nature, and you knew that when you first opened the source.
You also knew that when you picked a license, and it's a major reason for the particular choice you made. Want to force companies to share? Pick GPL.
If you decide to yoke a dragon, and it instead snatches your shiny lure and flies away to its cave, you don't get to complain that the dragon isn't playing nice and doesn't want to become your beast of burden. If you picked MIT as your license, that's on you.
Can't release someone else's proprietary source under a "do whatever the fuck you want" license and actually do whatever the fuck you want, without getting sued.
Only more reason for OSS to embrace AI generation - once it leaks into enough widely used or critical (think cURL) dependencies and exceeds certain critical mass, any judgement on the IP aspects other than "public domain" (in the broader sense) will become infeasible, as enforcing a different judgement would be like doing open heart surgery on the global economy.
I'm very old man shouting at clouds about this stuff. I don't want to review code the author doesn't understand and I don't want to merge code neither of us understand.
> I don't want to review code the author doesn't understand
This really bothers me. I've had people ask me to do some task except they get AI to provide instructions on how to do the task and send me the instructions, rather than saying "Hey can you please do X". It's insulting.
Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he'd had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.
These are the same people that think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.
> These are the same people that think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.
This is very, very germane and a very quotable line. And these people have been around from long before LLMs appeared. These are the people who dash off an incomplete idea on Friday afternoon and expect to see a finished product in production by next Tuesday, latest. They have no self-awareness of how much context and disambiguation is needed to go from "idea in my head" to working, deterministic software that drives something like a process change in a business.
The unfortunate truth is that approach does work, sometimes. It's really easy and common for capable engineers to think their way out of doing something because of all the different things they can think about it.
Sometimes, an unreasonable dumbass whose only authority comes from the corporate hierarchy is needed to mandate that the engineers start chipping away at the tasks. If they weren't a dumbass, they'd know how unreasonable the thing they're mandating is, and if they weren't unreasonable, they wouldn't mandate that someone do it.
I am an engineer. "Sometimes" could be swapped for "rarely" above, but the point still stands: as much frustration as I have towards those people, they do occasionally lead to the impossible being delivered. But then again, a stopped clock -> twice a day, etc.
That approach sometimes does work, but usually very poorly and often not at all.
It can work very well when the higher-up is well informed and does have deep technical experience and understanding. Steve Jobs and Elon Musk are great, well-known examples of this. They've also provided great examples of the same approach mostly failing when applied outside of their areas of deep expertise and understanding.
Imagine a boring dystopia where everyone is given hallucinated tasks from LLMs that may in some crazy way be feasible but aren't, and you can't argue that they're impossible without being fired since leadership lacks critical thinking.
I’ve started to experience/see this and it makes me want to scream.
You can’t dismiss it out of hand (especially with it coming from up the chain) but it takes no time at all to generate by someone who knows nothing about the problem space (or worse, just enough to be dangerous) and it could take hours or more to debunk/disprove the suggestion.
I don’t know what to call this? Cognitive DDOS? Amplified Plausibility Attack? There should be a name for it and it should be ridiculed.
A friend experienced a similar thing at work - he gave a well-informed assessment of why something is difficult to implement and it would take a couple of weeks, based on the knowledge of the system and experience with it - only for the manager to reply within 5 min with a screenshot of an (even surprisingly) idiotic ChatGPT reply, and a message along the lines of "here's how you can do it, I guess by the end of the day".
I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.
Same here. You throw a question in a channel. Someone responds in 1 minute with a code example that they'd either have to have had lying around, or that would take > 5 minutes to write.
The code example was AI generated. I couldn't find a single line of code anywhere in any codebase. 0 examples on GitHub.
And of course it didn't work.
But it sent me on a wild goose chase, because I trusted this person to give me a valuable insight. It pisses me off so much.
I experienced mentioning an issue I was stuck on during standup one day, then some guy on my team DMs me a screenshot of chatGPT with text about how to solve the issue. When I explained to him why the solution he had sent me didn't make sense and wouldn't solve the issue, he sent me back the reply the LLM would give by pasting in my reply, at which point I stopped responding.
I'm just really confused what people who send LLM content to other people think they are achieving? Like if I wanted an LLM response, I would just prompt the LLM myself, instead of doing it indirectly though another person who copy/pastes back and forth.
> I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.
A far too common trap people fall into is the fallacy of "your job is easy as all you have to do is <insert trivialization here>, but my job is hard because ..."
Statistically generated text (token) responses constructed by LLMs to simplistic queries are an accelerant to the self-aggrandizing problem.
If it's that simple, sounds like you've got your solution! Go ahead and take care of it. If it fits V&V and other normal procedures, like passing tests and documentation, then we'll merge it in. Shouldn't be a problem for you since it will only take a moment.
People keep asking me if AI is going to take my job and recent experience shows that it very much is not. AI is great for being mostly correct and then giving someone without enough context a mostly correct way to shoot themselves in the foot.
AI further encourages the problem in DevOps/Systems Engineering/SRE where someone comes to you and says "hey can you do this for me", having already come up with the solution, instead of giving you the problem: "hey can you help me accomplish this". AI hands them solutions, which are one more step removed from what really needs to be done and have to be untangled first.
AI has knowledge, but it doesn't have taste. Especially when it doesn't have all of the context a person with experience has, it just has bad taste in solutions - or the absence of taste - with the additional problem that it makes it much easier for people to do things.
Permissions on what people have access to read and permission to change are now going to have to be more restricted, because not only are we dealing with folks who have limited experience with permissions, we now have them empowered by AI to do more things that are less advisable.
The question about whether it takes jobs away is more whether one programmer with taste can multiply their productivity between ~3-15x and take the same salary while demand for coding remains constant. It's less about whether the tool can directly replace 100% of the functions of a good programmer.
In corporate, you are _forced_ to trust your coworker somehow and swallow it. Especially higher-ups.
In free software though, these kinds of nonsense suggestions always happened, way before AI. Just look at any project mailing list.
It is expected that any new suggestion will encounter some resistance; the new contributor themselves should be aware of that. For serious projects specifically, the levels of skepticism are usually way higher than in corporations, and that's healthy and desirable.
> Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he'd had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.
I would find it very insulting if someone did this to me, for sure, as well as a huge waste of my time.
On the other hand I've also worked with some very intransigent developers who've actively fought against things they simply didn't want to do on flimsy technical grounds, knowing it couldn't be properly challenged by the requester.
On yet another hand, I've also been subordinate to people with a small amount of technical knowledge -- or a small amount of knowledge about a specific problem -- who'll do the exact same thing without ChatGPT: fire a bunch of mid-wit ideas downstream that you have already thought about, but you then need to spend a bunch of time explaining why their hot-takes aren't good. Or the CEO of a small digital agency I worked at circa 2004 asking us if we'd ever considered using CSS for our projects (which were of course CSS heavy).
Especially when you try to correct them and they insist AI is the correct one
Sometimes it's fun reverse engineering the directions back into various forum, Stack Overflow, and documentation fragments and pointing out how AI assembled similar things into something incorrect
I have just started adding DCO to _all_ of the open source code that I maintain and will be adding text like this in `CONTRIBUTING.md`:
---
LLM-Generated Contribution Policy
Color is a library full of complex math and subtle decisions (some of them possibly even wrong). It is extremely important that any issues or pull requests be well understood by the submitter and that, especially for pull requests, the developer can attest to the Developer Certificate of Origin for each pull request (see LICENCE).
If LLM assistance is used in writing pull requests, this must be documented in the commit message and pull request. If there is evidence of LLM assistance without such declaration, the pull request will be declined.
Any contribution (bug, feature request, or pull request) that uses unreviewed LLM output will be rejected.
---
I am also adding this to my `SECURITY.md` entries:
---
LLM-Generated Security Report Policy
Absolutely no security reports will be accepted that have been generated by LLM agents.
---
As it's mostly just me, I'm trying to strike a balance, but my preference is against LLM generated contributions.
> any issues or pull requests be well understood by the submitter
I really like this phrasing, particularly in regards to PRs. I think I'll find a way to incorporate this into my projects. Even for smaller, non-critical projects, it's such a distraction to deal with people trying to make "contributions" that they don't clearly understand.
But I refuse to use it as anything more than a fancy autocomplete. If it suggests code that's pretty close to what I was about to type anyway, I accept it.
This ensures that I still understand my code, that there shouldn't be any hallucination derived bugs, [1] and there really shouldn't be any questions about copyright if I was about to type it.
I find using copilot this way speeds me up. Not really because my typing is slow, it's more that I have a habit of getting bored and distracted while typing. Copilot helps me get to the next thinking/debugging part sooner.
My brain really can't comprehend the idea that anyone would not want to understand their own code. Especially if they are going to submit it as a PR.
And I'm a little annoyed that the existence of such people is resulting in policies that will stop me from using LLMs as autocomplete when submitting to open source projects.
I have tried using copilot in other ways. I'd love for it to be able to do menial refactoring tasks for me. But every time I experiment, it seems to fall off the rails so fast. Or it just ends up slower than what I could do manually, because it has to re-generate all my code instead of just editing it.
[1] Though I find it really interesting that if I'm in the middle of typing a bug, copilot is very happy to autocomplete it in its buggy form. Even when the bug is obvious from local context, like I've typoed a variable name.
That’s how I use it too. I’ve tried to make agent mode work but it ends up taking just as long if not longer than just making the edits myself. And unless you’re very narrowly specific models like sonnet will go off track making changes you never asked for. At least gpt4.1 is pretty lazy I guess.
When I use LLM for coding tasks, it's like "hey please translate this YAML to structs and extract any repeated patterns to re-used variables". It's possible to do this transform with deterministic tools, but AI will do a fine job in 30s and it's trivial to test the new output is identical to the prompt input.
My high-level work is absolutely impossible to delegate to AI, but AI really helps with tedious or low-stakes incidental tasks. The other day I asked Claude Code to wire up some graphs and outlier analysis for some database benchmark result CSVs. Something conceptually easy, but takes a fair bit of time to figure out libraries and get everything hooked up unless you're already an expert at csv processing.
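For a sense of what that kind of one-off script looks like, here's a minimal pandas/matplotlib sketch; the file and column names are made up for illustration:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Minimal sketch: load benchmark results, flag outliers by z-score, plot them.
# "benchmark_results.csv", "run", and "latency_ms" are illustrative names.
df = pd.read_csv("benchmark_results.csv")

mean = df["latency_ms"].mean()
std = df["latency_ms"].std()
df["outlier"] = (df["latency_ms"] - mean).abs() > 3 * std

plt.plot(df["run"], df["latency_ms"], label="latency")
plt.scatter(df.loc[df["outlier"], "run"],
            df.loc[df["outlier"], "latency_ms"],
            color="red", label="outliers (>3 sigma)")
plt.xlabel("run")
plt.ylabel("latency (ms)")
plt.legend()
plt.savefig("latency_outliers.png")
```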
In my experience, AI will not do a fine job of things like this.
If the definition is past any sort of length, it will hallucinate new properties, change the names, etc. It also has a propensity to start skipping bits of the definitions by adding in comments like "/** more like this here **/"
It may work for you for small YAML files, but beware doing this for larger ones.
Worst part about all that is that it looks right to begin with because the start of the definitions will be correct, but there will be mistakes and stuff missing.
I've got a PoC hanging around where I did something similar by throwing an OpenAPI spec at an AI and telling it to generate some typescript classes because I was being lazy and couldn't be bothered to run it through a formal tool.
Took me a while to notice a lot of the definitions had subtle bugs, properties were missing and it had made a bunch of stuff up.
What does "AI" mean? GPT3.5 on a website, or Claude 4 Opus plugged into function calling and a harness of LSP, type checker and tool use? These are not the same, neither in terms of output quality nor in capability space. We need to be more specific about the tools we use when we discuss them. "IDEs are slow to load" wouldn't be a useful statement either.
> I don't want to review code the author doesn't understand
I get that. But the AI tooling when guided by a competent human can generate some pretty competent code, a lot of it can be driven entirely through natural language instructions. And every few months, the tooling is getting significantly more capable.
I'm contemplating what exactly it means to "understand" the code though. In the case of one project I'm working on, it's an (almost) entirely vibe-coded new storage backend to an existing VM orchestration system. I don't know the existing code base. I don't really have the time to have implemented it by hand (or I would have done it a couple years ago).
But, I've set up a test cluster and am running a variety of testing scenarios on the new storage backend. So I understand it from a high level design, and from the testing of it.
As an open source maintainer myself, I can imagine (thankfully I haven't been hit with it myself) how frustrating getting all sorts of low quality LLM "slop" submissions could be. I also understand that I'm going to have to review the code coming in whether or not the author of the submission understands it.
So how, as developers, do we leverage these tools as appropriate, and signal to other developers the level of quality in code. As someone who spent months tracking down subtle bugs in early Linux ZFS ports, I deeply understand that significant testing can trump human authorship and review of every line of code. ;-)
> I'm contemplating what exactly it means to "understand" the code though.
You can't seriously be questioning the meaning of "understand"... That's straight from Jordan B. Peterson's debate playbook which does nothing but devolve the conversation into absurdism, while making the person sound smart.
> I've set up a test cluster and am running a variety of testing scenarios on the new storage backend. So I understand it from a high level design, and from the testing of it.
You understand the system as well as any user could. Your tests only prove that the system works in specific scenarios, which may very well satisfy your requirements, but they absolutely do not prove that you understand how the system works internally, nor that the system is implemented with a reliable degree of accuracy, let alone that it's not misbehaving in subtle ways or that it doesn't have security issues that will only become apparent when exposed to the public. All of this might be acceptable for a tool that you built quickly which is only used by yourself or a few others, but it's far from acceptable for any type of production system.
> As someone who spent months tracking down subtle bugs in early Linux ZFS ports, I deeply understand that significant testing can trump human authorship and review of every line of code.
This doesn't match my (~20y) experience at all. Testing is important, particularly more advanced forms like fuzzing, but it's not a failproof method of surfacing bugs. Tests, like any code, can itself have bugs, it can test the wrong things, setup or mock the environment in ways not representative of real world usage, and most importantly, can only cover a limited amount of real world scenarios. Even in teams that take testing seriously, achieving 100% coverage, even for just statements, is seen as counterproductive and as a fool's errand. Deeply thorough testing as seen in projects like SQLite is practically unheard of. Most programmers I've worked with will often only write happy path tests, if they bother writing any at all.
Which isn't to say that code review is the solution. But a human reviewing the code, building a mental model of how it works and how it's not supposed to work, can often catch issues before the code is even deployed. It is at this point that writing a test is valuable, so that that specific scenario is cemented in the checks for the software, and regressions can be avoided.
So I wouldn't say that testing "trumps" reviews, but that it's not a reliable way of detecting bugs, and that both methods should ideally be used together.
This to me is interesting when it comes to free software projects; sure there are a lot of people contributing as their day job. But if you contribute or manage a project for the pleasure of it, things which undermine your enjoyment - cleaning up AI slop - are absolutely a thing to say "fuck off" over.
Oh hey, the thing I predicted in my blog titled "yes i will judge you for using AI" happened lol
Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions. LLMs throw that entire concept on its head by presenting code that has competent markers but none of the backing experience. It is a very very jarring experience for experienced individuals.
I suspect that virtual or in person meetings and other forms of social proof independent of the actual PR will become far more crucial for making inroads in large projects in the future.
I've started seeing this at work with coworkers using LLMs to generate code reviews. They submit comments which are way above their skill level which almost trick you in to thinking they are correct since only a very skilled developer would make these suggestions. And then ultimately you end up wasting tons of time proving how these suggestions are wrong. Spending far more time than the person pasting the suggestions spent to generate them.
By far the largest review-effort PRs of my career have been in the past year, due to mid-sized LLM-built features. Multiple rounds of other signoffs saying "lgtm" with only minor style comments only for me to finally read it and see that no, it is not even remotely acceptable and we have several uses built by the same team that would fail immediately if it was merged, to say nothing of the thousands of other users that might also be affected. Stuff the reviewers have experience with and didn't think about because they got stuck in the "looks plausible" rut, rather than "is correct".
So it goes back for changes. It returns the next day with complete rewrites of large chunks. More "lgtm" from others. More incredibly obvious flaws, race conditions, the works.
And then round three repeats mistakes that came up in round one, because LLMs don't learn.
This is not a future style of work that I look forward to participating in.
I think the issue is with people taking mental shortcuts and thus no longer properly thinking about design decisions and the bigger picture in terms of concepts of the software.
It also needs proper guideline enforcement. If an engineer produces poorly tested and unreviewed code, then the buck stops with them. This is a human problem more than it is a tool problem.
I'm not really in the field any longer, but one of my favorite things to do with LLMs is ask for code reviews. I usually end up learning something new. And a good 30-50% of the suggestions are useful. Which actually isn't skillful enough to give it a title of "code reviewer", so I certainly wouldn't foist the suggestions on someone else.
funny enough I had coworkers who similarly had a hold of the jargon but without any substance. They would always turn out to be time sinks for others doing the useful work. AI imitating that type of drag on the workplace is kinda funny ngl.
Probabilistic patterns strung together are something different from an end-to-end, intention-driven, solidly linked chain of thought that is anchored, like pylons, in relevant context at critical points.
Yep 100%, it is something I have also observed. Frankly it has been frustrating to the point that I spun up a quick one-off html site to rant/get my thoughts out. https://jaysthoughts.com/aithoughts1
> Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions.
Yep, and it's not just code. Student essays, funding applications, internal reports, fiction, art...everything that AI touches has this problem that AI outputs look superficially similar to the work of experts.
I have learned over time that the actually smart people worth listening to, avoid jargon beyond what is strictly necessary, talk in simple terms with specific goals/improvements/changes in mind.
If I'm having to reread something over and over to understand what they're even trying to accomplish, odds are it's either AI generated or an attempt at sounding smart instead of being constructive.
The trajectory so far has been that AI outputs are converging on expert output not just in superficial similarity but increasingly in quality. We are obviously not there yet, and some might say we never will be. But if we do get there, there is a whole new conversation to be had.
I suspect that there are at least 1 or 2 more significant discoveries needed, in terms of architecture and the general way these models work, before these things become actual experts. Maybe they will never get there and we will instead discover how to better incorporate facts and reasoning, rather than just ingesting billions of training data points.
Looks like your blog post got submitted here and then I assume triggered the flame war flag. A lot of people just reading the title and knee jerking in the comments:
I suppose I did bring that on myself with the title didn't I. I believe I have fixed the site for mobile so hopefully some of those thread complaints have been rectified.
This is signed off primarily by RedHat, and they tend to be pretty serious/corporate.
I suspect their concern is not so much whether users own the copyright to AI output, but rather the risk that AI will spit out code from its training set that belongs to another project.
Most hypervisors are closed source and some are developed by litigious companies.
I'd also worry that a language model is much more likely to introduce subtle logical errors, potentially ones which violate the hypervisor's security boundaries - and a user relying heavily on that model to write code for them will be much less prepared to detect those errors.
Generally speaking AI will make it easier to write more secure code. Tooling and automation help a lot with security and AI makes it easier to write good tooling.
I would wager good money that in a few years the most security-focused companies will be relying heavily on AI somewhere in their software supply chain.
So I don't think this policy is about security posture. No doubt human experts are reviewing the security-relevant patches anyway.
While LLMs are really good at generating content, one of their key weaknesses is their (relative) inability to detect _missing_ content.
I'd argue that the most impactful software security bugs in the last couple of decades (Heartbleed etc) have been errors of omission, rather than errors of inclusion.
This means LLMs are:
1) producing lots more code to be audited
2) poor at auditing that code for the most impactful class of bugs
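To make the "errors of omission" point concrete, here is a minimal, hypothetical Python sketch (not from any project discussed here): a length-prefixed echo handler whose bug is a bounds check that simply isn't there, which is exactly the kind of thing happy-path tests and "looks plausible" review tend to miss.

    # A buffer reused across requests, standing in for process memory.
    _shared_buffer = bytearray(64)

    def handle_echo(packet: bytes) -> bytes:
        declared_len = int.from_bytes(packet[:2], "big")  # assumed 2-byte length prefix
        payload = packet[2:]
        _shared_buffer[: len(payload)] = payload
        # MISSING: if declared_len > len(payload): raise ValueError("length exceeds payload")
        # Without that check, echoing declared_len bytes from the reused buffer can
        # leak whatever a previous request left behind (the Heartbleed-style omission).
        return bytes(_shared_buffer[:declared_len])

    # A happy-path test passes and proves nothing about the omission:
    assert handle_echo(b"\x00\x05hello") == b"hello"

Every line that is present looks fine; the vulnerability is the line that is absent, so neither line-by-line review of the diff nor the obvious tests will flag it.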
I'd doubt this very much - LLMs hallucinate API calls and commit all sorts of subtle errors that you need to catch (esp. if you're on proprietary problems which it's not trained on).
It's a good replacement for Google, but probably nothing close to what it's being hyped out to be by the capital allocators.
Possibly, but QEMU is such a critical piece of software in our industry. Its application stretches from one end to the other - desktop VM, cloud/remote instance, build server, security sandbox, cross-platform environment, etc. Even a small legal risk can hurt the industry pretty badly.
The policy is concise and well bounded. It seems to me to assert that you cannot safely assign attribution of authorship of software code that you think was generated algorithmically.
I use the term algorithmic because I think it is stronger than "AI lol". I note they use terms like AI code generator in the policy, which might be just as strong but looks to me unlikely to become a useful legal term (it's hardly "a man on the Clapham omnibus").
They finish with this, rather reasonable flourish:
"The policy we set now must be for today, and be open to revision. It's
best to start strict and safe, then relax."
No doubt they do get a load of slop, but they seem to want to close the legal angles down first, and attribution seems a fair place to start. This playbook looks way better than curl's.
This could honestly break open source, with how quickly you can generate bullshit, and how long it takes to review and reject it. I can imagine more projects going the way of Android where you can download the source, but realistically you can't contribute as a random outsider.
I have an online acquaintance that maintains a very small and not widely used open-source project and the amount of (what we assume to be) automated AI submissions* they have to wade through is kinda wild given the very small number of contributors and users the thing has. It's gotta be clogging up these big projects like a DDoS attack.
*"Automated" as in bots and "AI submissions" as in ai-generated code
For many projects you realistically can't contribute as a random outsider anyway, simply because of the effort involved in grokking enough of the existing architecture to figure out where to make changes.
Historically the opposite of quality contributions has been no contributions, not net-negative contributions (random slop that costs more in review than it provides benefit).
i mean they say the policy is open for revision and it's also possible to make exceptions; if it's an excuse, they are going out of their way to let people down easy
I'm not sure which way AI would move the dial when it comes to the median submission. Humans can, and do, make some crap code.
If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.
Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.
I can see some people choosing to avoid AI due to the possibility of legal issues. I'm doubtful of the likelihood of such problems, but some people favour eliminating all possibility over minimizing likelihood. The philosopher in me feels like people who think they have eliminated the possibility of something just haven't thought about it enough.
> If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.
> Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.
This ignores the fact that many open source projects do not have the resources to dedicate to a large number of contributions. A side effect of LLM generated code is probably going to be a lot of code. I think this is going to be an issue that is not dependent on the overall quality of the code.
I thought that this could be an opportunity for volunteers who can't dedicate the time to learn a codebase thoroughly enough to be a regular committer. They just have to evaluate a patch to see if it meets a threshold of quality where they can pass it on to someone who does know the codebase well.
The barrier to being able to do a first commit on any project is usually quite high; there are plenty of people who would like to contribute to projects but cannot dedicate the time and effort to pass that initial threshold. This might allow people to contribute at a lower level while gently introducing them to the codebase, where perhaps they might become a regular contributor in the future.
It is interesting to read the pro-AI rant in the comments on the linked commit. The person who is threatening to use "AI" anyway has almost no contributions either in qemu or on GitHub in general.
This is the target group for code generators. All talk but no projects.
I'd hope there could be some distinction between using LLM as a super autocomplete in your IDE, vs giving it high-level guidelines and making it generate substantive code. It's a gray area, sure, but if I made a contribution I'd want to be able to use the labor-saving feature of Copilot, say, without danger of it copying an algorithm from open source code. For example, today I generated a series of case statements and Copilot detected the pattern and saved me tons of typing.
That and also just AI glasses that become an extension of my mind and body, just giving me clues and guidance on everything I do including what's on my screen.
I see those glasses as becoming just a part of me, just like my current dumb glasses are a part of me that enables me to see better, the smart glasses will help me to see AND think better.
My brain was trained on a lot of proprietary code as well, the copyright issues around AI models are pointless western NIMBY thinking and will lead to the downfall of western civilization if they keep pursuing legal what-ifs as an excuse to reject awesome technology.
This seems absolutely impossible to enforce. All my editors give me AI assisted code hints. Zed, cursor, VS code. All of them now show me autocomplete that comes from an LLM. There's absolutely no distinction between that code, and code that I've typed out myself.
It's like complaining that I may have no legal right to submit my stick figure because I potentially copied it from the drawing of another stick figure.
I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway. There's no way the people that write these things aren't aware they're completely unenforceable.
> I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway.
Of course it is. And nobody said otherwise, because that is explicitly stated on the commit message:
[...] More broadly there is, as yet, no broad consensus on the licensing implications of code generators trained on inputs under a wide variety of licenses
And in the patch itself:
[...] With AI content generators, the copyright and license status of the output is ill-defined with no generally accepted, settled legal foundation.
What other commenters pointed out is that, beyond the legal issue, other problems also arise from the use of AI-generated code.
It’s like the seemingly confusing “nothing to declare” gates you pass through at customs after you’ve already made your declarations. Walking through that gate is a conscious act that places culpability on you, so you can’t simply say “oh, I forgot” or something.
The thinking here is probably similar: if AI-generated code becomes poisonous and is detected in a project, the DCO could allow shedding liability onto the contributor that said it wasn’t AI-generated.
> Of course it is. And nobody said otherwise, because that is explicitly stated on the commit message
Don’t be ridiculous. The majority of people are in fact honest, and won’t submit such code; the major effect of the policy is to prevent those contributions.
Then you get plausible deniability for code submitted by villains, sure, but I’d like to hope that’s rare.
Neovim doesn't force you to use AI, unless you configure it yourself. If your editor doesn't allow you to switch it off, there must be a big problem with it.
I understand where this comes from but I think it's a mistake. I agree it would be nice if there were "well settled law" regarding AI and copyright, but there are probably relatively few rulings and next to zero legislation on which to base a position.
In addition to a policy to reject contributions from AI, I think it may make sense to point out places where AI generated content can be used. For example - how much of QEMU project's (copious) CI setup is really stuff that is critical content to protect? What about ever-more interesting test cases or environments that could be enabled? Something like "contribute those things here instead, and make judicious use of AI there, with these kinds of guard rails..."
What's the risk of not doing this? Better code but slower velocity for an open source project?
I think that particular brand of risk makes sense for this particular project, and the authors don't seem particularly negative toward GenAI as a concept, just going through a "one way door" with it.
A simpler solution is just to wait until the legal situation is clearer.
QEMU is (mostly) GPL 2.0 licensed, meaning (most) code contributions need to be GPL 2.0 compatible [0]. Let's say, hypothetically, there's a code contribution added by some patch involving gen AI code which is derived/memorised/copied from non-GPL compatible code [1]. Then, hypothetically, a legal case sets precedent that gen AI FOSS code must re-apply the license of the original derived/memorised/copied code. QEMU maintainers would probably need to roll back all those incompatible code contributions. After some time, those code contributions could have ended up with downstream callers which also need to be rewritten (even in CI code).
It might be possible to first say "only CI code which is clearly labelled as 'DO NOT RE-USE: AI' or some such". But the maintainers would still need to go through and rewrite those parts of the CI code if this hypothetical plays out. Plus it adds extra work to reviews and merge processes etc.
it's just less work and less drama for everyone involved to say "no thank you (for now)".
----
caveat: IANAL, and licensing is not my specific expertise (but i would quite like it to be one day)
This isn't like some other legal questions that go decades before being answered in court. There are dozens of cases working through the courts today that will shed light on some aspects of the copyright questions within a few years. QEMU has made great progress over the last 22 years without the aid of AI, waiting a few more years isn't going to hurt them.
I think you need to read between the lines here. Anything you do is a legal risk, but this particular risk seems acceptable to many of the world's largest and richest companies. QEMU isn't special, so if they're taking this position, it's most likely simply because they don't want to deal with LLM-generated code for some other reason, and are eager to use legal risk as cover to avoid endless arguments on mailing lists.
We do that in corporate environments too. "I don't like this" -> "let me see what lawyers say" -> "a-ha, you can't do it because legal says it's a risk".
There is a well settled practice in computing that you just don't plagiarize code. Even a small snippet. Even if copyright law would consider such a small thing "fair use".
In the first place, in order to post to StackOverflow, you are required to have the copyright over the code, and be able to grant them a perpetual license.
Obviously I cannot show the code base, but when I pick a pre-existing solution from Stack Overflow or elsewhere—though it is quite rare—I do add a comment linking to the source: after all, in the case of Stack Overflow the discussion there might be interesting for the future maintainers of the function.
I just checked, though, and the code base I'm now working with has eight Stack Overflow links. Not all were even written by me, according to a quick check with git blame and git log -S.
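For anyone curious, that kind of quick check can be sketched roughly as follows. The git subcommands (`git grep -n`, `git log -S`) are real; the wrapper script itself is just an illustrative assumption about how one might run them from the root of a checkout.

    import subprocess

    def run(*args: str) -> str:
        # Run a git command and return its stdout as text.
        return subprocess.run(args, capture_output=True, text=True, check=False).stdout

    # Current occurrences of Stack Overflow links, with file and line number.
    print(run("git", "grep", "-n", "stackoverflow.com") or "no Stack Overflow links found")

    # Commits that added or removed the string (what `git log -S` searches for).
    print(run("git", "log", "--oneline", "-S", "stackoverflow.com"))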
This isn't 100% true meaning it isn't well settled. Have people already forgotten Google vs Oracle? Google ended up winning that after years and years but the judgements went back and forth and there are around 4 or 5 guidelines to determine whether something is or isn't fair use and generative AI would fail at a few of those.
Google vs. Oracle was about whether APIs are copyrightable, which is an important issue that speaks to antitrust. Oracle wanted the interface itself to be copyrighted so that even if someone reproduced the API from a description of it, it would infringe. The implication being that components which clone an API would be infringing, even though their implementation is original, discouraging competitors from making API-compatible components.
My comment didn't say anything about the output of AI being fair use or not, rather that fair use (no matter where you are getting material from) doesn't ipso facto mean that copy-paste is considered okay.
Every employer I ever had discouraged copy and paste from anywhere as a blanket rule.
At least, that had been the norm, before the LLM takeover. Obviously, organizations that use AI now for writing code are plagiarizing left and right.
> Google vs. Oracle was about whether APIs are copyrightable, which is an important issue that speaks to antitrust.
In addition to the Structure, Sequence and Organization claims, the original filing included a claim for copyright violation on 9 identical lines of code in rangeCheck(). This claim was dropped after the judge asked Oracle to reduce the number of claims, which forced Oracle to pare down to their strongest claims.
I've been trying out Claude Code (the tool I've found most effective in terms of agentic code gen/manipulation) for an emulator project of mine for the last few days. Part of it is a compiler from an architecture definition to disassembler/interpreter/recompiler. I hit a fairly minor compiler bug and decided to ask Claude to debug and fix it. Some things I noted:
1. My C# code compiled just fine and ran even, but it was convinced that I was missing a closing brace on a lambda near where the exception was occurring. The diff was ... Putting the existing brace on a new line. Confidently stated that was the problem and declared it fixed.
2. It did figure out that an unexpected type was being seen, and implemented a pathway that allowed for it to get to the next error, but didn't look into why that type had gotten there; that was the actual bug, not the unhandled type. So it "fixed" it, but just kicked the can down the road.
3. When figuring out the issue, it just looked at the stack trace. That was it. It was running the compiler itself; it could've just embedded some debug code (like I did) and worked out what the actual issue was, but it didn't even try. The exception was just a NotSupportedException with no extra details to work off of, so adding just a crumb of context would let you solve the issue.
Now, is this the simplest emulator you could throw AI at? No, not at all. But neither is qemu. I'm thoroughly unconvinced that current tools could provide real value on codebases like these. I'm bullish on them for the future, and I use GenAI constantly, but this ain't a viable use case today.
"Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>"
I am just wondering how we differentiate between AI-generated code and human-written code that is influenced by or copied from some unknown source. The same licensing problem may arise with human code as well, especially for OSS where anyone can contribute.
Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.
> Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.
It’s a power saw. A really powerful tool that can be dangerous if used improperly. In that sense the code generator can have more or less of a mind of its own depending on the wielder.
Ok I think I’ve stretched the analogy to the breaking point…
there's no audit trail for how most code gets shaped anyway
a teammate's intuition from a past outage
a one-liner from some old jira ticket
even the shape of a func pulled from habit
none of that is reviewable but still it gets trusted lol
ai moves faster than group consensus
this ban won't slow down the tech
it'll just make projects like qemu harder to enter
harder to scale, harder to test thru properly
so if we maintain code like this
we gotta know the trade we're making
we're preserving trust but limiting throughput
maybe fine idk but don't confuse it as future proofing
i kinda feel it does expose that trust in oss is social, not epistemic.
we accept complex things if we know who dropped it
and we reject clean things if it smells synthetic
so the real qn isn't
> did we use ai?
it's
> can we even maintain this in 6mo?
and if the answer's yes
doesn't really matter who produced the code fr
Signed mostly by people at RedHat, which is owned by IBM, which makes Watson, which beat humans at Jeopardy in 2011.
> These are early days of AI-assisted software development.
Are they? Or is this just IBM destroying another acquisition slowly.
Meanwhile the Dotnet Runtime is fully embracing AI. Which people on the outside may laugh at but you have extremely talented engineers like Stephen Toub and David Fowler advocating for it.
So enterprises: next time you have an IBM rep trying to sell you AI services, do yourself a favor and go to any other number of companies out there who are actually serious about helping you build for the future.
And since I am a North Carolina native, here’s to hoping IBM and RedHat get their stuff together.
>> The tools will mature, and we can expect some to become safely usable in free software projects.
It should be possible to build a useful AI code generator for a given programming language solely from the source code for the language itself. Doing so however would require some maturity.
Using AI code generators, I have been able to get a code base large enough that they were starting to make nonsense changes.
However, my overall experience has me thinking about how this is going to be a massive boon to open source. So many patches, so many new tools will be created to streamline getting new packages into repos. Everything can be tested.
Open source is going to be epically boosted now.
QEMU deciding to sit out from this acceleration is crazy to me, but probably what is going to give Xen/Docker/Podman the lead.
Including prompts would create transparency but still wouldn't resolve the underlying copyright uncertainty of the output or guarantee the code wasn't trained on incompatibly-licensed material.
You’d need to hash the model weights and save the seeds for the temperature prng as well, in order to verify the provenance. Ideally it would be reproducible, right?
It would need to be more than that. A prompt for one model can have different results versus another. Even when the same model gets different treatment at inference time, e.g. quantization, the same prompt can produce different output from the unquantized and quantized versions.
One of several reasons to use an open model even if it isn't quite as good. Version control the models and commit the prompts with the model name and a hash of the parameters. I'm not really sure what value that reproducibility adds though.
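A rough sketch of what such a provenance record could look like, assuming an open model whose weights are a local file. The model name, file paths, and field names here are hypothetical; the point is only that the weights hash, sampling seed and parameters, and prompt all end up in version control next to the generated patch.

    import hashlib, json, pathlib

    def sha256_of(path: str) -> str:
        # Hash the weights file in chunks so large models don't blow up memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    record = {
        "model_name": "example-open-model-7b",                # hypothetical name
        "weights_sha256": sha256_of("weights.safetensors"),   # hypothetical path
        "sampling": {"seed": 1234, "temperature": 0.2, "top_p": 0.95},
        "prompt_file": "prompts/add-foo-feature.txt",         # committed alongside the patch
    }
    pathlib.Path("provenance.json").write_text(json.dumps(record, indent=2))

Whether anyone would actually replay such a record is another question, but it is about the minimum needed for the "reproducible generation" idea to even be testable.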
I'm interested to see how this plays out. I'd like a similar policy for my projects, but also a similar policy/T&C that prohibits the crawling of the content too.
Only way to prohibit crawling is to go back to invite only, probably self-hosted repositories. These companies have no shame, your T&Cs won't mean anything to them and you have no way of proving they violated them without some kind of discovery into their training data.
This is a "BlockBuster laughs Netflix out of the room" moment. I am a huge fan of QEMU and used it throughout my career. The maintainers have every right to govern their project as they see fit. But this is a lot of mental gymnastics to justify clinging to punchcards in a world where we now have magnetic tape and keyboards to do things faster. This tech didn't spawn weeks ago. Every major project has had at least two years to prepare for this moment.
2 years isn’t that long. It took the Linux kernel 10 years to start accepting code written in Rust. This isn’t quite the same as the typical frontend flavor-of-the week JavaScript library.
> This is a "BlockBuster laughs Netflix out of the room" moment
I'm not sure that's the dunk you think it is. Good for Netflix for making money, but we're drowning in their empty slop content now and worse off for it.
Who is forcing you to watch slop? And mind you, there was a TON of garbage at any local Blockbuster back in the day, with the added joy of having to go somewhere to rent it, being slapped with late and rewind fees, or not even having what you wanted to watch available.
Choice is good. It means more slop, but also more gold. Figure out how to find the gold.
You're so dramatic. Like they said in the declaration, these are the early days of AI development and all the problems they mention will be eventually resolved so they have no problem taking a backseat while things sort themselves out and I respect that choice.
When enough people don't want to do it anymore. Feel free to step up, live with email patches, and add to the numbers of those who don't like it and say so.
Why is it archaic if it works? I get there might be other ways to do patch sharing and discussion but what exactly is your problem with email as a transport?
You might as well describe voice and ears as archaic!
Sending patches over email is basically a filter for slop. Stops the low effort drive by PRs and anyone who actually wants to invest some time in to contributing won't have a problem working out the workflow.
So essentially it’s “let us cover ourselves by saying it’s not allowed” and in practice that means not allowing code that a human thinks is AI generated code.
Universities have this issue too, despite many offering students and staff Grammarly (Gen AI) while also trying to ban Gen AI.
Sounds like a good idea to ensure developers are owning the code they submit rather than hiding behind "I don't know why it does that, ChatGPT wrote it".
Use AI if you want to, but if the person on the other side can tell, and you can't defend the submission as your own, that's a problem.
> Use AI if you want to, but if the person on the other side can tell, and you can't defend the submission as your own, that's a problem.
The actual policy is "don't use AI code generators"; don't try to weasel that into "use it if you want to, but if the person on the other side can tell". That's effectively "it's only cheating if you get caught".
By way of analogy, Open Source projects also typically have policies (whether written or unwritten) that you only submit code you are legally allowed to submit. In theory, you could take a pile of proprietary reverse-engineered code that you have no license to, or a pile of code from another project that you aren't respecting the license of, and submit it anyway, and slap a `Signed-off-by` on it. Nothing will physically stop you, and people might not be able to tell. That doesn't make it OK.
The way I interpret it is that if you brainstorm using ChatGPT but write your own code using the ideas created in this step that would be fine, the reviewer wouldn't suspect the code of being AI generated because you've made sure it fits in with the project and actually works. The exact wording here is that they will reject changes they suspect of being AI generated, not that you can't have read anything AI generated in the process.
Getting AI to remind you of the libraries API is a fair bit different to having it generate 1000 lines of code you have hardly read before submitting.
Well, I guess the key difference is that code is deterministic; whether a paper accomplishes its goals is somewhat subjective, but with code it's an absolute certainty.
I'm sure that if a contributor working on a feature used cursor to initially generate the code but then goes over it to ensure it's working as expected that would be allowed, this is more for those folks that just want to jam in a quick vibe-coded PR so they can add "contributed to the QEMU project" on their resumes.
The rules regarding the origin of code contributions are rather strict; that is, you can't contribute other people's code unless you can make sure that the licence is appropriate. An LLM may output a copy of someone else's code, sometimes verbatim, without giving you its origin, so you can't contribute code written by an LLM.
AI generated code is generally pretty good and incredibly fast.
Seeing this new phenomenon must be difficult for those people who have spent a long time perfecting their craft. Essentially, they might feel that their skillsets are being undermined. It would be especially hard for people who associate a lot of their self-identity with their job.
Being a purist is noble, but I think that this stance is foolish. Essentially, people who chose not to use AI code tools will be overtaken by the people who do. That's the unfortunate reality.
Yes the reasoning behind the decision is clear and as you described. But I would also make the point that the decision also comes with certain consequences, to which a discussion about merits is directly relevant.
We're talking about a decision that the people behind QEMU made that affects people, and whose consequences make the discussion of merits "directly relevant".
If we're talking about something that neither involving QEMU nor the people behind it, where is the relevance? It's just a rant on AI at that point.
I wish people would make distinction regarding the size/scope of the AI-generated parts. Like with video copyright laws, where a 5-second clip from a copyrighted movie is usually considered fair use and not frowned upon.
Because for projects like QEMU, current AI models can actually do mind-boggling stuff. You can give it a PDF describing an instruction set, and it will generate you wrapper classes for emulating particular instructions. Then you can give it one class like this and a few paragraphs from the datasheet, and it will spit out unit tests checking that your class works as the CPU vendor describes.
Like, you can get from 0% to 100% test coverage several orders of magnitude faster than doing it by hand. Or refactoring, where you want to add support for a particular memory virtualization trick and need to update 100 instruction classes based on a straightforward, but not 100% formal, rule. A human developer would be pulling their hair out, while an LLM will do it faster than you can get a coffee.
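To illustrate the kind of per-instruction boilerplate being described, here is a toy Python sketch (not QEMU code; the ADD-immediate instruction, register file, and flag rule are invented for the example): one small wrapper class per instruction plus a unit test that restates the datasheet's definition of its behaviour. Multiplied across a few hundred opcodes, this is the mechanical work being delegated to a model.

    import unittest

    class Cpu:
        def __init__(self):
            self.regs = [0] * 8
            self.zero_flag = False

    class AddImmediate:
        """ADD rd, imm8 -- rd := (rd + imm8) mod 256, Z flag set if the result is 0."""
        def __init__(self, rd: int, imm: int):
            self.rd, self.imm = rd, imm

        def execute(self, cpu: Cpu) -> None:
            result = (cpu.regs[self.rd] + self.imm) & 0xFF
            cpu.regs[self.rd] = result
            cpu.zero_flag = result == 0

    class TestAddImmediate(unittest.TestCase):
        def test_wraps_to_zero_and_sets_z(self):
            cpu = Cpu()
            cpu.regs[3] = 0xFF
            AddImmediate(rd=3, imm=1).execute(cpu)
            self.assertEqual(cpu.regs[3], 0x00)
            self.assertTrue(cpu.zero_flag)

    if __name__ == "__main__":
        unittest.main()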
Not all jurisdictions are the US, and not all jurisdictions allow fair use, but instead have specific fair dealing laws. Not all jurisdictions have fair dealing laws, meaning that every use has to be cleared.
There are simple algorithms that everyone will implement the same way down to the variable names, but aside from those fairly rare exceptions, there's no "maximum number of lines" threshold below which copying counts as fair use, regardless of the licence of the code being reused in your scenario.
Depending on the context, even in the US that 5-second clip would not pass fair use doctrine muster. If I made a new film cut entirely from five second clips of different movies and tried a fair use doctrine defence, I would likely never see the outside of a courtroom for the rest of my life. If I tried to do so with licensing, I would probably pay more than it cost to make all those movies.
Look up the decisions over the last two decades over sampling (there are albums from the late 80s and 90s — when sampling was relatively new — which will never see another pressing or release because of these decisions). The musicians and producers who chose the samples thought they would be covered by fair use.
Qemu can make the choice to stay in the "stone age" if they want. Contributors who prefer AI assistance can spend their time elsewhere.
It might actually be prudent for some (perhaps many foundational) OSS projects to reject AI until the full legal case law precedent has been established. If they begin taking contributions and we find out later that courts find this is in violation of some third party's copyright (as shocking as that outcome may seem), that puts these projects in jeopardy. And they certainly do not have the funding or bandwidth to avoid litigation. Or to handle a complete rollback to pre-AI background states.
I have a few sociophilosophical quibbles about the impact of this, but to focus on a practical part:
> by that point anyone else can reproduce your OS for whatever the cost of tokens is without ever touching your code.
Do you think that the cost of tokens will remain low enough once these companies for now operating at loss have to be profitable, and it really is going to be “anyone else”? Or, would it be limited to “big tech” or select few corporations who can pay a non-trivial amount of money to them?
Do you think it would mean they essentially sell GPL’ed code for proprietary use? Would it not affect FOSS, which has been till now partially powered by the promise to contributors that their (often voluntary) work would remain for public benefit?
Do you think someone would create and make public (and gather so much contributor effort) something on the scale Linux, if they knew that it would be open to be scraped by an intermediary who can sell it at whatever price they choose to set to companies that then are free to call it their own and repackage commercially without contributing back, providing their source or crediting the original authors in any way?
I understand why experienced developers don't want random AI contributions from no-knowledge "developers". In any situation, if a human has to review AI code line by line, that would tie up humans for years, even setting aside the legal questions.
#1 There will be no verifiable way to prove something was AI generated beyond early models.
#2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects. The only room for debate on that is an apocalypse level scenario where humans fail to continue producing semiconductors or electricity.
#3 If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust. If the license permits forking then it could be forked too, but cloning and purging any potential legal issues might be preferred.
There still is a path for open source projects. It will be different. There's going to be much, much more software in the future and it's not going to be all junk (although 99% might.)
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects
Still waiting to see evidence of AI-driven projects eating the lunch of "traditional" projects.
It's happening slowly all around. It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated. But there are also local tools generated faster than you could adjust existing tools to do what you want. I'm running 3 things now just for myself that I generated from scratch instead of trying to send feature requests to existing apps I can buy.
It's only going to get more pervasive from now on.
> It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated
I feel like we'd be hearing from business that crushed their competition by delivering faster or with fewer people. Where are those businesses?
> But there are also local tools generated
This is really not the same thing as the original claim ("Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects").
> I feel like we'd be hearing from business that crushed their competition by delivering faster or with fewer people. Where are those businesses?
As if the tech part were the major part of getting the product to market.
Those businesses are probably everywhere. They just aren't open about admitting they're using AI to speed up their marketing/product design/programming/project management/graphics design, because a) it's not normal outside some tech startup sphere to brag about how you're improving your internal process, and b) because almost everyone else is doing that too, so it partially cancels out - that is what competition on the market means, and c) admitting to use of AI in current climate is kind of a questionable PR move.
WRT. those who fail to leverage the new tools and are destined to be outcompeted, this process takes extended time, because companies have inertia.
>> But there are also local tools generated
> This is really not the same thing as the original claim
Point is that such wins compound. You get yak shaving done faster by fashioning your own tools on the fly, and it also cuts cost and a huge burden of maintaining relationships with third parties[0]
--
[0] - Because each account you create, each subscription you take, even each online tool you kinda track and hope hope hope won't disappear on you - each such case comes with a cognitive tax of a business relationship you probably didn't want, that often costs you money directly, and that you need to keep track of.
> They just aren't open about admitting they're using AI to speed up their marketing/product design/programming/project management/graphics design
Sure… they'd hate to get money thrown at them from investors.
And because from the outside everything looks worse than ever. Worse quality, no more support, established companies going crazy to cut costs. AI slop is replacing thoughtful content across the web. Engineering morale is probably at an all time low for my 20 years watching this industry...
So my question is: if so many people should be bragging to me and celebrating how much better things are, why does it look to me like they are worse and everyone is miserable about it...?
> Those businesses are probably everywhere. They just aren't open about admitting
"Where's the evidence?" "Probably everywhere."
OK, good luck, have fun
Yup. Or, "Just look around!".
Schrödinger's AI. It's everywhere, but you can't point to it because it's apparently indistinguishable from humans, except for the shitty AI, which is just shitty AI.
It's a thought terminating cliche.
If it was self-evident then I wouldn’t need to ask for evidence. And I imagine you wouldn’t need to be waving your hands making excuses for the lack of evidence.
This is happening right now and it won’t be obvious until the liquidity events provide enough cover for victory lap story telling.
The very knowledge that an organization is experiencing hyper acceleration due to its successful adoption of AI across the enterprise is proprietary.
There are no HBS case studies about businesses that successfully established and implemented strategic pillars for AI because the pillars were likely written in the past four months.
> This is happening right now and it won’t be obvious until
I asked for evidence and, as always, lots of people are popping out of the woodwork to swear that it's true but I can't see the evidence yet.
OK, then. Good luck with that.
Can you show these 3 things to us?
For some reason these fully functional ai generated projects that the authors vibe out while playing guitar and clipping their toenails are never open source.
Going by the standard of "But there are also local tools generated faster than you could adjust existing tools to do what you want", here's a random one of mine that's in regular use by my wife:
https://github.com/TeMPOraL/qr-code-generator
Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro (I forgot to note that down in this project), and recently modified with Claude Code because I had to test it on something.
Getting the first version of this up was literally both faster and easier than finding a QR code generator that I'm sure is not bloated, not bullshit, not loaded with trackers, that's not using shorteners or its own URL (it's always a stupid idea to use URL shorteners you don't control), not showing ads, mining bitcoin and shit, one that my wife can use in her workflow without being distracted too much. Static page, domain I own, a bit of fiddling with LLMs.
What I can't link to is the half a dozen single-use tools or faux tools created on the fly as part of working on something. But this happens to me a couple of times a month.
To anchor another vertex in this parameter space: I found it easier and faster to ask an LLM to build me a "breathing timer" (one that counts down N seconds and resets, repeatedly) with an analog indicator, because a search query to Google/Kagi would be of comparable length to the prompt, and then I'd still have to click through results!
EDIT: Okay, another example:
https://github.com/TeMPOraL/tampermonkey-scripts/blob/master...
It overlays a trivial UI to set up looping over a segment of any YouTube video, and automatically persists the setting by video ID. It solves the trivial annoyance of channel jingles and other bullshit at start/end of videos that I use repeatedly as background music.
This was mostly done zero-shot by Claude, with maybe two or three requests for corrections/extra features, total development time maybe 15 minutes. I use it every day all the time ever since.
You could say, "but SponsorBlock" or whatever, but per what GP wrote, I just needed a small fraction of functionality of the tools I know exist, and it was trivial to generate that with AI.
Your QR generator is actually a project written by humans repackaged:
https://github.com/neocotic/qrious
All the hard work was made by humans.
I can do `npm install` without having to pay for AI, thanks.
I am reminded of a meme about musicians. Not well enough to find it, but it was something like this:
There's two points here: 1) even though most of the people on here know what npm is, many of us are not web developers and don't really know how to turn a random package into a useful webapp.
2) The AI is faster than googling a finished product that already exists, not just as an NPM package, but as a complete website.
Especially because search results require you to go through all the popups everyone stuffs everywhere (cookie banners, ads) before you even find out whether the site you landed on was actually a scam, and whether it does the right thing (or perhaps *anything*) at all.
It is also, for many of us, the same price: free.
> I am reminded of a meme about musicians. Not well enough to find it
You only need to search for “loops goat skin”. You’re butchering the quote and its meaning quite a bit. The widely circulated version is:
> I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.
It’s not about “real musicians”¹ but a personal reflection on dependencies and abstractions and the nature of creative work and remixing. Your interpretation of it is backwards.
¹ https://en.wikipedia.org/wiki/No_true_Scotsman
Ice Ice Baby getting the bass riff of Under Pressure is sampling. Making a cover is covering. Milli Vanilli is another completely different situation.
I am sorry, none of your points are made. Makes no sense.
The LLM work sounds dumb, and the suggestion that it made "a qr code generator" is disingenuous. The LLM barely did a frontend for it. Barely.
Regarding the "free" price, read the comment I replied on again:
> Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro
Paid tools.
It sounds like the author paid for `npm install`, and thinks he's on top of things and being smart.
Here's Armin Ronacher describing his open-source "sloppy XML" parser that he had AI write with his guidance from this week: https://lucumr.pocoo.org/2025/6/21/my-first-ai-library/
> To be clear: this isn't an endorsement of using models for serious Open Source libraries. This was an experiment to see how far I could get with minimal manual effort, and to unstick myself from an annoying blocker. The result is good enough for my immediate use case and I also felt good enough to publish it to PyPI in case someone else has the same problem.
By their own admission, this is just kind of OK. They don’t even know how good or bad it is, just that it kind of solved an immediate problem. That’s not how you create sustainable and reliable software. Which is OK, sometimes you just need to crap something out to do a quick job, but that doesn’t really feel like what your parent comment is talking about.
> the authors vibe out while playing guitar and clipping their toenails
I don't think anyone is claiming that. If you submit changes to a FOSS project and an LLM assisted you in writing them how would anyone know? Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.
The (admittedly still controversial) claim being made is that developers with LLM assistance are more productive than those without. Further, that there is little incentive for such developers to advertise this assistance. Less trouble for all involved to represent it as 100% your own unassisted work.
> Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.
That is a big assumption. If everyone were doing that, this wouldn’t be a major issue. But as the curl developer has noted, people are using LLMs without thinking and wasting everyone’s time and resources.
https://www.linkedin.com/posts/danielstenberg_hackerone-curl...
I can attest to that. Just the other day I got a bug report, clearly written with the assistance of an LLM, for software which has been stable and used in several places for years. This person, when faced with an error on their first try, instead of pondering “what am I doing wrong” instead opened a bug report with a “fix”. Of course, they were using the software wrong. They did not follow the very short and simple instructions and essentially invented steps (probably suggested by an LLM) that caused the problem.
Waste of time for everyone involved, and one more notch on the road to causing burnout. Some of the worst kind of users are those who think “bug” means “anything which doesn’t immediately behave the way I thought it would”. LLMs empower them, to the detriment of everyone else.
Why would you need to carefully review code? That is so 2024. You’re bottlenecking the process and are at a disadvantage when the AI could be working 24/7. We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.
AI “assistance” is a short intermediate phase, like the “centaurs” that Garry Kasparov was very fond of (human + computer beat both a human and a computer by itself… until the computer-only became better).
https://en.wikipedia.org/wiki/Advanced_chess
> We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.
Was your comment tongue-in-cheek? If not, where is this huge mass of AI-generated software?
All around you, just that it doesn’t make sense for developers to reveal that a lot of their work is now about chunking and refining the specifications written by the product owner.
Admitting such is like admitting you are overpaid for your job, and that a 20 USD AI-agent can do better and faster than you for 75% of the work.
Is it easy to admit that skills you have spent 10+ years learning are progressively being replaced by a machine (like thousands of jobs in the past)?
More and more, "developer" is going to be a monkey job where your only task is to make sure there is enough coal in the steam engine.
Compilers destroyed the jobs of developers writing assembler code, they had to adapt. They insisted that hand-written assembler was better.
Here is the same, except you write code in natural language. It may not be optimal in all situations but it often gets the job done.
> All around you, just that it doesn’t make sense for developers to reveal that
OK, but I asked for evidence and people just keep not providing any.
"God is all around you; he just works in mysterious ways"
OK, good luck with that.
Billions of people believe in god(s). In fact, 75 to 85% of the world population, btw.
And not that long ago, the majority of the population believed the Earth is flat, and that cigarettes are good for your health. Radioactive toys were being sold to children.
Wide belief does not equal truth.
And?
I have a complete proof that P=NP but it doesn't make sense to reveal to the world that now I'm god. It would crush their little hearts.
P = NP is less "crush their little hearts", more "may cause widespread heart attacks across every industry due to cryptography failing, depending on if the polynomial exponent is small enough".
A very very big if.
Also a sufficiently good exponential solver would do the same thing.
Good luck debugging
Except this one is (see your sibling).
Mine is. And it is awesome: https://github.com/banagale/FileKitty
The most recent release includes a MacOS build in a dmg signed by Apple: https://github.com/banagale/FileKitty/releases/tag/v0.2.3
I vibed that workflow just so more people could have access to this tool. It was a pain and it actually took time away from toenail clipping.
And while I didn't lay hands on a guitar much during this period, I did manage to build this while bouncing between playing Civil War tunes on a 3D-printed violin and generating music in Suno for a soundtrack to “Back on That Crust,” the missing and one true spiritual successor to ToeJam & Earl: https://suno.com/song/e5b6dc04-ffab-4310-b9ef-815bdf742ecb
This app is concatenating files with an extra line of metadata added? You know this could be done in a few lines of shell script? You can then make it a finder action extension so it’s part of the system file manager app.
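For what it's worth, the core idea really is only a handful of lines. Here is a rough Python sketch (not the actual FileKitty implementation, and the metadata line format is made up): concatenate the given files, each prefixed with one line of metadata.

    import sys, pathlib

    def bundle(paths):
        parts = []
        for p in paths:
            path = pathlib.Path(p)
            # One metadata line per file (path and size), then its contents.
            parts.append(f"=== {path} ({path.stat().st_size} bytes) ===\n" + path.read_text(errors="replace"))
        return "\n".join(parts)

    if __name__ == "__main__":
        sys.stdout.write(bundle(sys.argv[1:]))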
Sic transit gloria mundi
Only the simplest one is open (and before you discount it as too trivial, somehow none of the other ones did what I wanted) https://github.com/viraptor/pomodoro
The others are just too specific for me to be useful for anyone else: an android app for automatic processing of some text messages and a work scheduling/prioritising thing. The time to make them generic enough to share would be much longer than creating my specific version in the first place.
> and before you discount it as too trivial, somehow none of the other ones did what I wanted
No offense, it's really great that you are able to make apps that do exactly what you want, but your examples are not very good to show that "software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects" (as someone else suggested above). Complex real world software is different from pomodoro timers and TODO lists.
Cut it out with patronising, I work with complex software, which is why I specifically mentioned the only example I published was simple.
> but your examples are not very good to show that "software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects"
Here's the thing though - it's already the case, because I wouldn't have created those tools by hand otherwise. I just don't have the time, and they're too personal/edge-case to pay anyone to make them. So the comparison in this case is between 100% human-developed non-existent software and an AI-generated project which exists. The latter wins in every category by default.
I don't think they're being patronizing, it's that "simple personal app that was barely worth making" is nice to have but not at all what they want evidence of.
Whether it was worth making is for me to judge since it is a personal app. It improves my life and work, so yes, it was very much worth it.
> The time to make them generic enough to share would be much longer than creating my specific version in the first place
Welcome to the reality of software development. "Works on my machine" is often not good enough to make the cut.
It doesn't matter that my thing doesn't generalise if someone can build their own customised solution quickly. But also, if I wanted to sell it or distribute it, I'd ensure it was more generic from the beginning.
You need to put your money where your mouth is.
If you comment about AI generated code in a thread about qemu (mission-critical project that many industries rely upon), a pomodoro app is not going to do the trick.
And no, it doesn't "show that is possible". qemu is not only more complex, it's a whole different problem space.
Not OP, but:
I'm getting towards the end of a vibe coded ZFS storage backend to ganeti that includes the ability to live migrate VMs to another host by: taking snapshot and replicating it to target, pausing VM, taking another incremental snapshot and replicating it, and then unpausing the VM on the new destination machine. https://github.com/linsomniac/ganeti/tree/newzfs
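Roughly, the migration sequence described above looks something like the following. This is a hedged sketch only: the helper names (pause_vm, resume_vm_on) and dataset/host handling are invented stand-ins, not the code in that branch or ganeti's actual API.

    # Sketch of the snapshot / replicate / pause / incremental / unpause sequence.
    import subprocess

    def run(cmd, **kw):
        subprocess.run(cmd, check=True, **kw)

    def zfs_snapshot(dataset, name):
        run(["zfs", "snapshot", f"{dataset}@{name}"])

    def zfs_send_recv(dataset, snap, target_host, incremental_from=None):
        send = ["zfs", "send"]
        if incremental_from:
            send += ["-i", f"{dataset}@{incremental_from}"]
        send.append(f"{dataset}@{snap}")
        sender = subprocess.Popen(send, stdout=subprocess.PIPE)
        # Stream to `zfs receive` on the destination over ssh.
        run(["ssh", target_host, "zfs", "receive", "-F", dataset], stdin=sender.stdout)
        sender.stdout.close()
        if sender.wait() != 0:
            raise RuntimeError("zfs send failed")

    def pause_vm(vm): ...             # placeholder for the hypervisor pause call
    def resume_vm_on(host, vm): ...   # placeholder for starting the VM on the target

    def live_migrate(vm, dataset, target_host):
        zfs_snapshot(dataset, "migrate-base")
        zfs_send_recv(dataset, "migrate-base", target_host)    # bulk copy while VM runs
        pause_vm(vm)                                           # short downtime window
        zfs_snapshot(dataset, "migrate-final")
        zfs_send_recv(dataset, "migrate-final", target_host,
                      incremental_from="migrate-base")         # small incremental delta
        resume_vm_on(target_host, vm)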
Other LLM tools I've built this week:
This afternoon I built a web-based SQL query editor/runner with results display, for dev/ops people to run read-only queries against our production database. To replace an existing super simple one, and add query syntax highlighting, snippet library, and other modern features. I can probably release this though I'd need to verify that it won't leak anything. Targets SQL Server.
A couple CLI Jira tools to pull a list of tickets I'm working on (with cache so I can get an immediate response, then get updates after Jira response comes back), and tickets with tags that indicate I have to handle them specially.
An icinga CLI that downtimes hosts, for when we do sweeping machine maintenances like rebooting a VM host with dozens of monitored children.
An Ansible module that is a "swiss army knife" for filesystem manipulation, merging the functions of copy, template, and file, so you can loop over a list and: create a directory, template a couple of files into it (doing a notify on one and a when on another), and ensure a file exists if it doesn't already, to reduce the boilerplate when doing a bunch of file deploys. This I will release as an Ansible Galaxy module once I have it tested a little more.
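As a rough sketch of what the entry point of such a combined module might look like (hypothetical only; the parameter names are invented and the real module surely handles much more, e.g. templating, ownership, modes, and diff output):

    #!/usr/bin/python
    # Hypothetical skeleton of a combined file-ops module; not the author's module.
    import os
    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(
            argument_spec=dict(
                path=dict(type='path', required=True),
                state=dict(type='str', required=True,
                           choices=['directory', 'touch', 'template']),
            ),
            supports_check_mode=True,
        )
        path, state = module.params['path'], module.params['state']
        changed = False
        if state == 'directory' and not os.path.isdir(path):
            if not module.check_mode:
                os.makedirs(path)
            changed = True
        elif state == 'touch' and not os.path.exists(path):
            if not module.check_mode:
                open(path, 'a').close()
            changed = True
        # 'template', ownership, modes, notify handling etc. omitted in this sketch.
        module.exit_json(changed=changed, path=path)

    if __name__ == '__main__':
        main()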
None of this seems relevant to the original claim: "Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects"
I don't feel like it's meaningful to discuss the "competitiveness" of a handful of bespoke local or internal tools.
None of the features you mentioned are coming from the AI.
Here it is invoking the actual zfs commands:
https://github.com/ganeti/ganeti/compare/master...linsomniac...
All the extra python boilerplate just makes it harder to understand IMHO.
Looks like two commits:
https://github.com/linsomniac/ganeti/commit/e91766bfb42c67ab...
https://github.com/linsomniac/ganeti/commit/f52f6d689c242e3e...
Thanks, I hadn't pushed from my test cluster, check again. "This branch is 12 commits ahead of, 4 commits behind ganeti/ganeti:master"
I vibe-coded my own MySQL-compatible database that performs better than MariaDB, after my agent optimized it for 12 hours. It is also a time-traveling DB and performs better on all benchmarks and the AI says it is completely byzantine-fault-tolerant. Programmers, you had a nice run. /s
Not sure about parent, but you could argue JetBrains' fancy autocomplete is AI and generates a substantial portion of code. It runs using a local model and, in my experience, does a pretty good job of guessing the rest of the line with minimal input (so you could argue 80% of each line was AI generated).
How can you tell which project is which?
I mean, sure, there's plenty of devs who refuse to use AI, but how many projects rather than individuals are in each category?
And is Microsoft "traditional"? I name them specifically because their CEO claims 20-30% of their new code is AI generated: https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-3...
80-90% of Claude is now written by Claude
Using AI tools to make AI tools is not the impact outside of the AI bubble that people are looking for.
And whose lunch is it eating?
Your lunch. The developers behind Claude are very rich and do not need their developer careers, since they have enough to retire.
Cigarettes do not cause cancer.
that's like driving big personal vehicles, having a bunch of children, eating a bunch of meat, and doing nothing about it because marine and terrestrial ecosystems haven't been fully destroyed by global warming yet
Ahh, there you go, environmental activists outright saying that having children is a crime against nature. Wonderful, you've hit a rather bad stereotype right on the head. What's next? Earth would be better off if humanity were eradicated?
I feel like this is mostly proofless assertion. I'm aware what you hint at is happening, but the conclusions you arrive at are far from proven or even reasonable at this stage.
For what it's worth, I think AI for code will arrive at a place like where other coding tools sit: hinting, IntelliSense, linting, maybe even static or dynamic analysis. But I doubt NOT using AI will be a critical asset to productivity.
Someone else in the thread already mentioned it's a bit of an amplifier. If you're good, it can make you better, but if you're bad it just spreads your poor skills like a robot vacuum spreads animal waste.
I think that was his point: the project full of bad developers isn't the competition. It is a peer whose skill matches yours and who uses agents on top of that. By myself I am no match for myself + Cline.
That’s true in the short term. Longer term it's questionable, as using AI tools heavily means you don't remember all the details, creating a new form of technical debt.
Dude, have you ever looked at code you wrote 6 months ago and gone "What was the developer thinking?" ;-)
yes, constantly. I also don't remember much contextual domain info of a given section of code about 2 weeks into delving into some other part of the same app.
So-called AI makes this worse.
Let me remind you of gyms, now that humans have been spared much manual activity...
> So-called AI makes this worse.
I think that needs actual testing. At what time distances is there an effect, and how big is it? Even if there is an effect, it could be small enough that a mild productivity boost from AI is more important.
> So-called AI makes this worse.
The AI tooling is also really, really good at piecing together the code, the contextual domain, the documentation, the tests, and the related issues/tickets; it could even take the change history into account, and help refresh your memory of unfamiliar code in the context of the bugs or new changes you are looking at making.
Whether or not you go to the gym, you are probably going to want to use an excavator if you are going to dig a basement.
I don't need to remember much, really. I have tools for that.
Really, really good tools.
IMO LLMs are best when used as locally-run offline search engines. This is a clear and obvious disruptive technology.
But we will need to get a lot better at finetuning first. People don't want generalist LLMs, they want "expert systems".
Speak for yourself, I prefer generalist LLMs. Also, the bitter lesson of ML applies.
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects
"competitive", meaning: "most features/lines of code emitted" might matter to a PHB or Microsoft
but has never mattered to open source
I am of two minds about it, having now seen both good coders augmented by AI and bad coders further diminished by it (I would even argue it's worse than Stack Overflow, because back then they at least would have had to adjust the code a little bit).
I am personally somewhere in the middle, just good enough to know I am really bad at this so I make sure that I don't contribute to anything that is actually important ( like QEMU ).
But how many people recognize their own strengths and weaknesses? That is part of the problem and now we are proposing that even that modicum of self-regulation ( as flawed as it is ) be removed.
FWIW, I hear you. I also don't have an answer. Just thinking out loud.
Regarding #1, at least in the mainframe/cloud model of hosted LLMs, the operators have a history of model prompts and outputs.
For example, if using Copilot, Microsoft also has every commit ever made if the project is on GitHub.
They could, theoretically, determine what did or didn't come out of their models and was integrated into source trees.
Regarding #2 and #3, with relatively novel software like QEMU that models platforms that other open source software doesn't, LLMs might not be a good fit for contributions. Especially where emulation and hardware accuracy, timing, quirks, errata etc matter.
For example, modeling a new architecture or emulating new hardware might have LLMs generating convincing looking nonsense. Similarly, integrating them with newly added and changing APIs like in kvm might be a poor choice for LLM use.
#2 is a complete and total fallacy, trivially disprovable.
Overall velocity doesn't come from writing a lot more code, or even from writing code especially quickly.
It seems to me that the point in your first paragraph argues against your points #2 and #3.
If a project allows AI generated contributions, there's a risk that they'll be flooded with low quality contributions that consume human time and resources to review, thus paralyzing the project - it'd be like if you tried to read and reply to every spam email you receive.
So the argument goes that #2 and #3 will not materialize, blanket acceptance of AI contributions will not help projects become more competitive, it will actually slow them down.
Personally I happen to believe that reality will converge somewhere in the middle, you can have a policy which says among other things "be measured in your usage of AI," you can put the emphasis on having contributors do other things like pass unit tests, and if someone gets spammy you can ban them. So I don't think AI is going to paralyze projects but I also think its role in effective software development is a bit narrower than a lot of people currently believe...
I am guessing they don't need people to prove that contributions didn't contain AI code, they just need the contributor to say they didn't use any AI code. That way, if any AI code is found in their contribution the liability lies with the contributor (but IANAL).
AFAIK in most places it might help with the amount of damages, but does not let you off the hook.
Quoting them:
> The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax.
So, no need for the drama.
None of your claims here are based in factual assertion. These are unproven, wishful fantasies that may or may not be eventually true.
No one should be evaluating or writing policy based on fantasy.
Are you familiar with the futures market? It’s all about what you call fantasy! Similarly, if you are determining the strategy of your organization, all you have to help you is “fantasy”. By the time evidence exists in sufficient quantity, your lunch has already been eaten long ago. A good CEO is one who can see where the market is going before anyone else. You may be right that AI is just a fad, but given how much the big companies and all the major startups in the last few years are investing in it, it’s overwhelmingly a fringe position to have at this point.
Both the futures market and resource planning are based on evidential standards (usually). When you make those decisions without any reasoning, you are gambling, and might as well go to the casino.
But notably, FOSS development is neither a corporation nor stock trading. It is focused on longevity and maintainability.
> Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects.
There is zero evidence so far that AI improves software developer efficiency.
No, just because you had fun vibing with a chatbot doesn't mean you delivered the end product faster. All of the supposed AI software development gains are entirely self-reported based on "vibes". (Remember these are the same people who claimed massive developer efficiency gains from programming in Haskell or Lisp a few years back.)
Note I'm not even touching on the tech debt issue here, but it is also important.
P.S. The hallucination and counting to five problems will never go away. They are intrinsic to the LLM approach.
A reasonable conclusion about this would simply be that the developers are saying "we're not merging anything which you can't explain".
Which is entirely reasonable. The trend of people, say on HN, saying "I asked an LLM and this is what it said..." is infuriating.
It's just an upfront declaration that if your answer to something is "it's what Claude thinks" then it's not getting merged.
That’s not what the policy says, however. You could be the world’s most honest person, using Claude only to generate code you described to it in detail and fully understand, and would still be forbidden.
If AI can generate software so easily and which performs the expected functions, why do we even need to know that it did so? Isn't the future really just asking an AI for a result and getting that result? The AI would be writing all sorts of bespoke code to do the thing we ask, and then discard it immediately after. That is what seems more likely, and not 'so much software we have to figure out rights to'.
> If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust.
Yeah I don’t think so. But if it does then who cares? AI can just make a better QEMU at that point I guess.
They aren’t hurting anyone with this stance (except the AI hype lords), which I’m pretty sure isn’t actually an anti-AI stance, but a pragmatic response to AI slop in its current state.
Seems like a fake problem. Who would sue QEMU for using AI-generated code? OpenAI? Anthropic?
Anyone whose code is in a used model's training set.*
This is about future existential tail risk, not current risk.
* Depending on future court decisions in different jurisdictions
Is there any likelihood that the output of the model would be public domain? Even if the model itself is public domain, the prompt was created by a human and impacted the output, so I don't see how the output could be public domain. And then after that, the output was hopefully reviewed by the original prompting human and likely reviewed by another human during code review, leading to more human impact on the final code.
There is no copyright in AI art. Presumably the same reasoning would apply to AI code: https://iclg.com/news/22400-us-court-confirms-ai-generated-a...
This particular case is US only.
The rest of the world might decide differently.
Absolutely.
And as long as you're not worried about people in the USA reusing your code then you're all good!
Proprietary source code would not usually end up training LLMs. Unless it's leaked, how would an LLM have access to it?
> it would require speculative copyright owners to disassemble their binaries
I wonder whether AI might be a useful tool for making that easier.
If you have evidence then you can get courts to order disclosure or examination of code.
> And plenty of proprietary software has public domain code in it already.
I am pretty sure there is a significant amount of proprietary code that has FOSS code in it, against license terms (especially GPL and similar).
A lot of proprietary code is now being written using AIs trained on FOSS code, and companies are open about this. It might open an interesting can of worms.
> Unless its leaked
Given the number of people on HN who say they're using e.g. Cursor, OpenAI, etc. through work, and my experience with workplaces saying 'absolutely you can't use it', I suspect a large amount is being leaked.
I thought most of these did not use users' context and input for training?
Licence incompatibility is enough.
This is a win for MIT license though.
From what point of view?
For someone using MIT licensed code for training, it still requires a copy of the license and the copyright notice in "copies or substantial portions of the software". So I guess it's fine for a snippet, but if the AI reproduces too much of it, then it's in breach.
From the point of view of someone who does not want their code used by an LLM then using GPL code is more likely to be a breach.
> or public domain
https://news.artnet.com/art-world/ai-art-us-copyright-office...
https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
I'm pretty sure that this ship has sailed.
It's sailed, but towards the other way: https://www.bbc.com/news/articles/cg5vjqdm1ypo
That's a brand new ongoing lawsuit. The ship hasn't sailed in either direction yet. It hasn't even been clearly established if Midjourney has liability let alone where the bounds for such liability might lie.
Remember, anyone can attempt to sue anyone for anything at any time in a functional system. How far the suit makes it is a different matter.
On the contrary. IANAL, but this is my understanding of the law (setting aside the "work for hire" thing for simplicity)
1. If you come up with something completely new, you are the sole copyright holder.
2. If you take someone else's copyrighted work and transform it, then both of you have a copyright on the derivative work.
So if you write a brand new comic book that includes Darth Vader, you can't sell that without Disney's permission [1]: they have a copyright on Darth Vader, and so your comic book is partly copyrighted by them. But at the same time, they can't sell it without your permission, because you have a copyright on the comic book too.
In the case of Midjourney outputs, my understanding of the current state of the law is this:
1. Only humans can create copyrights
2. So if Midjourney creates an entirely new image that's not derivative of anyone else's work (as defined by long-established copyright law on derivative works), then nobody owns the copyright, and it's in the public domain
3. If Midjourney creates an image that is derived from someone else's work (as defined by long established copyright law on derivative works), then only Disney has a copyright on that derivative work.
And so, in theory, Disney could distribute Darth Vader images you made with Midjourney, unless you can convince the court that you had enough creative influence over them to warrant a copyright.
[1] Yes of course fair use, trying to make a point here
Doesn’t this also mean that if you transform the work created by Midjourney, you now have a copyright on the transformed work?
I wonder what counts for transformed, is a filter enough or does it have to be more than that?
That's my understanding, yes. "What counts as transformed" is fuzzy, but it's an old well-established problem with hundreds of years of case law.
https://www.wired.com/story/ai-art-copyright-matthew-allen/
https://www.cnbc.com/2025/03/19/ai-art-cannot-be-copyrighted...
Here are cases where the products of AI/ML are not the products of people and are not capable of being copyrighted. These are about the OUTPUT being unable to be copyrighted.
QEMU: Define policy forbidding use of AI code generators
If a software is truly wide open source in the sense of “do whatever the fuck you want with this code, we don’t care”, then it has nothing to fear from AI.
It won't apply to closed-source, non-public code, which the GPL (which QEMU uses) is quite good at ensuring becomes open source...
Open source is about sharing the source code. You generally need to force companies to share their source code derived from your project, or else companies will simply take it, modify it, and never release their changes, and charge for it too.
Sharing is caring, being forced to share does not foster care.
Companies don't care, so if you release something as open source that's relevant to them, "companies will simply take it, modify it, and never release their changes, and charge for it too" - but that is what companies do, that is their very nature, and you knew that when you first opened the source.
You also knew that when you picked a license, and it's a major reason for the particular choice you made. Want to force companies to share? Pick GPL.
If you decide to yoke a dragon, and it instead snatches your shiny lure and flies away to its cave, you don't get to complain that the dragon isn't playing nice and doesn't want to become your beast of burden. If you picked MIT as your license, that's on you.
Can't release someone else's proprietary source under a "do whatever the fuck you want" license and actually do whatever the fuck you want, without getting sued.
The license does exist so you can release your own software under it, however: https://en.wikipedia.org/wiki/WTFPL
Only more reason for OSS to embrace AI generation - once it leaks into enough widely used or critical (think cURL) dependencies and exceeds certain critical mass, any judgement on the IP aspects other than "public domain" (in the broader sense) will become infeasible, as enforcing a different judgement would be like doing open heart surgery on the global economy.
That's the situation we're already in with copyleft licences but legal teams still treat them like the plague.
You can do that but the fact you don't get sued is more luck than judgement.
It’d be like trying to squeeze blood from a stone
It's incredible watching someone who has no idea what they're talking about boast so confidently about what people "can" or "can't" do
It'd be like trying to squeeze blood from every single entity using the offending code, actually.
Interesting. Harder line than the LLVM one found at https://llvm.org/docs/DeveloperPolicy.html#ai-generated-cont...
I'm very much "old man shouting at clouds" about this stuff. I don't want to review code the author doesn't understand, and I don't want to merge code neither of us understands.
> I don't want to review code the author doesn't understand
This really bothers me. I've had people ask me to do some task except they get AI to provide instructions on how to do the task and send me the instructions, rather than saying "Hey can you please do X". It's insulting.
Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.
These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.
> These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.
This is very, very germane and a very quotable line. And these people have been around from long before LLMs appeared. These are the people who dash off an incomplete idea on Friday afternoon and expect to see a finished product in production by next Tuesday, latest. They have no self-awareness of how much context and disambiguation is needed to go from "idea in my head" to working, deterministic software that drives something like a process change in a business.
The unfortunate truth is that approach does work, sometimes. It's really easy and common for capable engineers to think their way out of doing something because of all the different things they can think about it.
Sometimes, an unreasonable dumbass whose only authority comes from corporate hierarchy is needed to mandate that the engineers start chipping away at the tasks. If they weren't a dumbass, they'd know how unreasonable the thing they're mandating is, and if they weren't unreasonable, they wouldn't mandate that someone do it.
I am an engineer. "Sometimes" could be swapped for "rarely" above, but the point still stands: as much frustration as I have towards those people, they do occasionally lead to the impossible being delivered. But then again, a stopped clock -> twice a day, etc.
That approach sometimes does work, but usually very poorly and often not at all.
It can work very well when the higher-up is well informed and does have deep technical experience and understanding. Steve Jobs and Elon Musk are great, well-known examples of this. They've also provided great examples of the same approach mostly failing when applied outside of their areas of deep expertise and understanding.
You can change "software" to "hardware" and this is still an all too common viewpoint, even for engineers that should know better.
Imagine a boring dystopia where everyone is given hallucinated tasks from LLMs that may in some crazy way be feasible but aren't, and you can't argue that they're impossible without being fired since leadership lacks critical thinking.
Reminds me of the wonderful skit, The Expert: https://www.youtube.com/watch?v=BKorP55Aqvg
And the solution: https://www.youtube.com/watch?v=B7MIJP90biM
That is incredibly accurate - I used to be at meetings like that monthly. Please submit this as an HN discussion.
That is a very good description of the Paranoia RPG.
Unfortunately this is the most likely outcome.
I’ve started to experience/see this and it makes me want to scream.
You can’t dismiss it out of hand (especially with it coming from up the chain), but it takes no time at all for someone who knows nothing about the problem space (or worse, just enough to be dangerous) to generate, and it could take hours or more to debunk/disprove the suggestion.
I don’t know what to call this? Cognitive DDOS? Amplified Plausibility Attack? There should be a name for it and it should be ridiculed.
It's simply the Bullshit Asymmetry Principle/Brandolini's Law. It's just that bullshit generation speedrunners have recently discovered tool-assists.
A friend experienced a similar thing at work - he gave a well-informed assessment of why something is difficult to implement and it would take a couple of weeks, based on the knowledge of the system and experience with it - only for the manager to reply within 5 min with a screenshot of an (even surprisingly) idiotic ChatGPT reply, and a message along the lines of "here's how you can do it, I guess by the end of the day".
I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.
Same here. You throw a question in a channel. Someone responds in 1 minute with a code example that either you had laying around, or would take > 5 minutes to write.
The code example was AI generated. I couldn't find a single line of code anywhere in any codebase. 0 examples on GitHub.
And of course it didn't work.
But it sent me on a wild goose chase because I trusted this person to give me a valuable insight. It pisses me off so much.
I experienced mentioning an issue I was stuck on during standup one day, then some guy on my team DMs me a screenshot of chatGPT with text about how to solve the issue. When I explained to him why the solution he had sent me didn't make sense and wouldn't solve the issue, he sent me back the reply the LLM would give by pasting in my reply, at which point I stopped responding.
I'm just really confused what people who send LLM content to other people think they are achieving? Like if I wanted an LLM response, I would just prompt the LLM myself, instead of doing it indirectly though another person who copy/pastes back and forth.
> I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.
A far too common trap people fall into is the fallacy of "your job is easy as all you have to do is <insert trivialization here>, but my job is hard because ..."
Statistically generated text (token) responses constructed by LLMs to simplistic queries are an accelerant to the self-aggrandizing problem.
Sounds like a teachable moment.
If it's that simple, sounds like you've got your solution! Go ahead and take care of it. If it fits V&V and other normal procedures, like passing tests and documentation, then we'll merge it in. Shouldn't be a problem for you since it will only take a moment.
Reminds me of "Appeal to Aithority". (not a typo)
An LLM said it, so it must be true.
https://blog.ploeh.dk/2025/03/10/appeal-to-aithority/
People keep asking me if AI is going to take my job and recent experience shows that it very much is not. AI is great for being mostly correct and then giving someone without enough context a mostly correct way to shoot themselves in the foot.
AI further encourages the problem in DevOps/Systems Engineering/SRE where someone comes to you and says "hey can you do this for me", having already come up with a solution, instead of giving you the problem: "hey can you help me accomplish this"... AI hands them solutions, which are then more steps away from being untangled into what really needs to be done.
AI has knowledge, but it doesn't have taste. Especially when it doesn't have all of the context a person with experience has, it just has bad taste in solutions, or simply the absence of taste, with the additional problem that it makes it much easier for people to do things.
Permissions on what people have access to read and permission to change are now going to have to be more restricted, because not only are we dealing with folks who have limited experience with permissions, we now have them empowered by AI to do more things which are less advisable.
The question about whether it takes jobs away is more whether one programmer with taste can multiply their productivity between ~3-15x and take the same salary while demand for coding remains constant. It's less about whether the tool can directly replace 100% of the functions of a good programmer.
In corporate settings, you are _forced_ to trust your coworker somehow and swallow it. Especially higher-ups.
In free software though, these kinds of nonsense suggestions have always happened, way before AI. Just look at any project mailing list.
It is expected that any new suggestion will encounter some resistance; new contributors themselves should be aware of that. For serious projects specifically, the levels of skepticism are usually way higher than in corporations, and that's healthy and desirable.
> Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.
I would find it very insulting if someone did this to me, for sure, as well as a huge waste of my time.
On the other hand I've also worked with some very intransigent developers who've actively fought against things they simply didn't want to do on flimsy technical grounds, knowing it couldn't be properly challenged by the requester.
On yet another hand, I've also been subordinate to people with a small amount of technical knowledge -- or a small amount of knowledge about a specific problem -- who'll do the exact same thing without ChatGPT: fire a bunch of mid-wit ideas downstream that you have already thought about, but you then need to spend a bunch of time explaining why their hot-takes aren't good. Or the CEO of a small digital agency I worked at circa 2004 asking us if we'd ever considered using CSS for our projects (which were of course CSS heavy).
My company hired a new CTO and he asked chatgpt to write some lengthy documents about "how engineering gets done in our company".
He also writes all his emails with chatgpt.
I don't bother reading.
Oddly enough, he recently promoted, as his right-hand man, a guy who has been fucking around with LLMs for years instead of working.
That's directly lethal, in a "limited sympathy for engineers who don't immediately head for the exit" sort of fashion. Best of luck.
It's the modern equivalent of sending a LMGTFY link, except the insult is from them being purely credulous and sincere
Especially when you try to correct them and they insist the AI is right.
Sometimes it's fun reverse engineering the directions back into various forum, Stack Overflow, and documentation fragments and pointing out how AI assembled similar things into something incorrect
I have just started adding DCO to _all_ of the open source code that I maintain and will be adding text like this to `CONTRIBUTING.md`:
---
LLM-Generated Contribution Policy
Color is a library full of complex math and subtle decisions (some of them possibly even wrong). It is extremely important that any issues or pull requests be well understood by the submitter and that, especially for pull requests, the developer can attest to the Developer Certificate of Origin for each pull request (see LICENCE).
If LLM assistance is used in writing pull requests, this must be documented in the commit message and pull request. If there is evidence of LLM assistance without such declaration, the pull request will be declined.
Any contribution (bug, feature request, or pull request) that uses unreviewed LLM output will be rejected.
---
I am also adding this to my `SECURITY.md` entries:
---
LLM-Generated Security Report Policy
Absolutely no security reports will be accepted that have been generated by LLM agents.
---
As it's mostly just me, I'm trying to strike a balance, but my preference is against LLM generated contributions.
> any issues or pull requests be well understood by the submitter
I really like this phrasing, particularly in regards to PRs. I think I'll find a way to incorporate this into my projects. Even for smaller, non-critical projects, it's such a distraction to deal with people trying to make "contributions" that they don't clearly understand.
I do use GitHub copilot on my personal projects.
But I refuse to use it as anything more than a fancy autocomplete. If it suggests code that's pretty close to what I was about to type anyway, I accept it.
This ensures that I still understand my code, that there shouldn't be any hallucination derived bugs, [1] and there really shouldn't be any questions about copyright if I was about to type it.
I find using copilot this way speeds me up. Not really because my typing is slow, it's more that I have a habit of getting bored and distracted while typing. Copilot helps me get to the next thinking/debugging part sooner.
My brain really can't comprehend the idea that anyone would not want to understand their own code. Especially if they are going to submit it as a PR.
And I'm a little annoyed that the existence of such people is resulting in policies that will stop me from using LLMs as autocomplete when submitting to open source projects.
I have tried using copilot in other ways. I'd love for it to be able to do menial refactoring tasks for me. But every-time I experiment, it seems to fall off the rails so fast. Or it just ends up slower than what I could do manually because it has to re-generate all my code instead of just editing it.
[1] Though I find it really interesting that if I'm in the middle of typing a bug, copilot is very happy to autocomplete it in its buggy form. Even when the bug is obvious from local context, like I've typoed a variable name.
That’s how I use it too. I’ve tried to make agent mode work, but it ends up taking just as long, if not longer, than making the edits myself. And unless you’re very narrowly specific, models like Sonnet will go off track, making changes you never asked for. At least GPT-4.1 is pretty lazy, I guess.
When I use LLM for coding tasks, it's like "hey please translate this YAML to structs and extract any repeated patterns to re-used variables". It's possible to do this transform with deterministic tools, but AI will do a fine job in 30s and it's trivial to test the new output is identical to the prompt input.
My high-level work is absolutely impossible to delegate to AI, but AI really helps with tedious or low-stakes incidental tasks. The other day I asked Claude Code to wire up some graphs and outlier analysis for some database benchmark result CSVs. Something conceptually easy, but takes a fair bit of time to figure out libraries and get everything hooked up unless you're already an expert at csv processing.
There is ongoing discussion about this topic in the QEMU AI policy: https://lore.kernel.org/qemu-devel/20250625150941-mutt-send-...
In my experience, AI will not do a fine job of things like this.
If the definition is past any sort of length, it will hallucinate new properties, change the names, etc. It also has a propensity to start skipping bits of the definitions by adding in comments like "/** more like this here **/"
It may work for you for small YAML files, but beware doing this for larger ones.
Worst part about all that is that it looks right to begin with because the start of the definitions will be correct, but there will be mistakes and stuff missing.
I've got a PoC hanging around where I did something similar by throwing an OpenAPI spec at an AI and telling it to generate some typescript classes because I was being lazy and couldn't be bothered to run it through a formal tool.
Took me a while to notice a lot of the definitions had subtle bugs, properties were missing and it had made a bunch of stuff up.
For bigger inputs I have the AI write the new output to an adjacent file and diff the two to confirm equivalence
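For the YAML case upthread, a minimal version of that equivalence check might compare the parsed structures rather than the raw text, so formatting and key-ordering differences don't count as changes (file names are invented for illustration; requires PyYAML):

    # Compare the original input and the AI-regenerated output structurally.
    import yaml  # pip install pyyaml

    def same_structure(original_path, regenerated_path):
        with open(original_path) as a, open(regenerated_path) as b:
            return yaml.safe_load(a) == yaml.safe_load(b)

    if __name__ == "__main__":
        ok = same_structure("config.yaml", "config.regenerated.yaml")
        print("identical structure" if ok else "MISMATCH - review manually")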
What does "AI" mean? GPT3.5 on a website, or Claude 4 Opus plugged into function calling and a harness of LSP, type checker and tool use? These are not the same, neither in terms of output quality nor in capability space. We need to be more specific about the tools we use when we discuss them. "IDEs are slow to load" wouldn't be a useful statement either.
oh agree and amplify this -- graphs are worlds unto themselves. some of the high end published research papers have astounding contents, for example..
You’re the exact kind of person I want to work with. Self reflective and in opposition of lazy behaviours.
> I don't want to review code the author doesn't understand
I get that. But the AI tooling when guided by a competent human can generate some pretty competent code, a lot of it can be driven entirely through natural language instructions. And every few months, the tooling is getting significantly more capable.
I'm contemplating what exactly it means to "understand" the code though. In the case of one project I'm working on, it's an (almost) entirely vibe-coded new storage backend to an existing VM orchestration system. I don't know the existing code base. I don't really have the time to have implemented it by hand (or I would have done it a couple years ago).
But, I've set up a test cluster and am running a variety of testing scenarios on the new storage backend. So I understand it from a high level design, and from the testing of it.
As an open source maintainer myself, I can imagine (thankfully I haven't been hit with it myself) how frustrating getting all sorts of low quality LLM "slop" submissions could be. I also understand that I'm going to have to review the code coming in whether or not the author of the submission understands it.
So how, as developers, do we leverage these tools as appropriate, and signal to other developers the level of quality in the code? As someone who spent months tracking down subtle bugs in early Linux ZFS ports, I deeply understand that significant testing can trump human authorship and review of every line of code. ;-)
> I'm contemplating what exactly it means to "understand" the code though.
You can't seriously be questioning the meaning of "understand"... That's straight from Jordan B. Peterson's debate playbook which does nothing but devolve the conversation into absurdism, while making the person sound smart.
> I've set up a test cluster and am running a variety of testing scenarios on the new storage backend. So I understand it from a high level design, and from the testing of it.
You understand the system as well as any user could. Your tests only prove that the system works in specific scenarios, which may very well satisfy your requirements, but they absolutely do not prove that you understand how the system works internally, nor that the system is implemented with a reliable degree of accuracy, let alone that it's not misbehaving in subtle ways or that it doesn't have security issues that will only become apparent when exposed to the public. All of this might be acceptable for a tool that you built quickly which is only used by yourself or a few others, but it's far from acceptable for any type of production system.
> As someone who spent months tracking down subtle bugs in early Linux ZFS ports, I deeply understand that significant testing can trump human authorship and review of every line of code.
This doesn't match my (~20y) experience at all. Testing is important, particularly more advanced forms like fuzzing, but it's not a failproof method of surfacing bugs. Tests, like any code, can itself have bugs, it can test the wrong things, setup or mock the environment in ways not representative of real world usage, and most importantly, can only cover a limited amount of real world scenarios. Even in teams that take testing seriously, achieving 100% coverage, even for just statements, is seen as counterproductive and as a fool's errand. Deeply thorough testing as seen in projects like SQLite is practically unheard of. Most programmers I've worked with will often only write happy path tests, if they bother writing any at all.
Which isn't to say that code review is the solution. But a human reviewing the code, building a mental model of how it works and how it's not supposed to work, can often catch issues before the code is even deployed. It is at this point that writing a test is valuable, so that that specific scenario is cemented in the checks for the software, and regressions can be avoided.
So I wouldn't say that testing "trumps" reviews, but that it's not a reliable way of detecting bugs, and that both methods should ideally be used together.
This to me is interesting when it comes to free software projects; sure there are a lot of people contributing as their day job. But if you contribute or manage a project for the pleasure of it, things which undermine your enjoyment - cleaning up AI slop - are absolutely a thing to say "fuck off" over.
> I don't want to review code the author doesn't understand
The author is me and my silicon buddy. We understand this stuff.
Of course we understand it. Just ask us!
Oh hey, the thing I predicted in my blog titled "yes i will judge you for using AI" happened lol
Basically, I think open source has traditionally relied HEAVILY on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by presenting code that carries the markers of competence but none of the backing experience. It is a very, very jarring experience for experienced individuals.
I suspect that virtual or in person meetings and other forms of social proof independent of the actual PR will become far more crucial for making inroads in large projects in the future.
I've started seeing this at work with coworkers using LLMs to generate code reviews. They submit comments which are way above their skill level and which almost trick you into thinking they are correct, since only a very skilled developer would make these suggestions. And then ultimately you end up wasting tons of time proving how these suggestions are wrong, spending far more time than the person pasting the suggestions spent generating them.
By far the largest review-effort PRs of my career have been in the past year, due to mid-sized LLM-built features. Multiple rounds of other signoffs saying "lgtm" with only minor style comments only for me to finally read it and see that no, it is not even remotely acceptable and we have several uses built by the same team that would fail immediately if it was merged, to say nothing of the thousands of other users that might also be affected. Stuff the reviewers have experience with and didn't think about because they got stuck in the "looks plausible" rut, rather than "is correct".
So it goes back for changes. It returns the next day with complete rewrites of large chunks. More "lgtm" from others. More incredibly obvious flaws, race conditions, the works.
And then round three repeats mistakes that came up in round one, because LLMs don't learn.
This is not a future style of work that I look forward to participating in.
I think a future with LLM coding requires many more tests, covering both happy and bad flows.
I think the issue is with people taking mental shortcuts and thus no longer properly thinking about design decisions and the bigger picture in terms of concepts of the software.
It also needs proper guideline enforcement. If an engineer produces poorly tested and unreviewed code, then the buck stops with them. This is a human problem more than it is a tool problem.
I'm not really in the field any longer, but one of my favorite things to do with LLMs is ask for code reviews. I usually end up learning something new. And a good 30-50% of the suggestions are useful. Which actually isn't skillful enough to give it a title of "code reviewer", so I certainly wouldn't foist the suggestions on someone else.
Funnily enough, I had coworkers who similarly had a hold of the jargon but without any substance. They would always turn out to be time sinks for the others doing the useful work. AI imitating that type of drag on the workplace is kinda funny, ngl.
Probabilistic patterns strung together are something different from an end-to-end, intention-driven, solidly linked chain of thought that is grounded, like pylons, in relevant context at critical points.
Yep, 100%, it is something I have also observed. Frankly it has been frustrating to the point that I spun up a quick one-off HTML site to rant/get my thoughts out. https://jaysthoughts.com/aithoughts1
Just some feedback: your site is hard to read on mobile devices because of the sidebar.
Thank you, I'll get that fixed.
Edit: Mobile should be fixed now
People keep saying LLMs will improve efficiency, but your comment proves otherwise.
It looks like LLMs are not good for cooperation, because their nature is randomness.
> Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions.
Yep, and it's not just code. Student essays, funding applications, internal reports, fiction, art...everything that AI touches has this problem that AI outputs look superficially similar to the work of experts.
I have learned over time that the actually smart people worth listening to avoid jargon beyond what is strictly necessary and talk in simple terms, with specific goals/improvements/changes in mind.
If I'm having to reread something over and over to understand what they're even trying to accomplish, odds are it's either AI generated or an attempt at sounding smart instead of being constructive.
The trajectory so far has been that AI outputs are increasingly converging with expert output, not just in superficial similarity but also in quality. We are obviously not there yet, and some might say we never will be. But if we do get there, there is a whole new conversation to be had.
I suspect that there are at least 1 or 2 more significant discoveries in terms of architecture and general way of models working, before these things become actual experts. Maybe they will never get there and we will discover how to better incorporate facts and reasoning, rather than just ingesting billions of training data points.
send your blog link please
https://jaysthoughts.com/aithoughts1 Bit of a rambly rant, but the prediction stuff I was tongue in cheek referring to above is at the bottom.
Looks like your blog post got submitted here and then I assume triggered the flame war flag. A lot of people just reading the title and knee jerking in the comments:
https://news.ycombinator.com/item?id=44384610
Funny, as the entire thing starts off with "Now, full disclosure, the title is a bit tongue-in-cheek.".
I suppose I did bring that on myself with the title didn't I. I believe I have fixed the site for mobile so hopefully some of those thread complaints have been rectified.
This is signed off primarily by Red Hat, and they tend to be pretty serious/corporate.
I suspect their concern is not so much whether users own the copyright to AI output, but rather the risk that AI will spit out code from its training set that belongs to another project.
Most hypervisors are closed source and some are developed by litigious companies.
> but rather the risk that AI will spit out code from its training set that belongs to another project.
this is everything that it spits out
This is an uninformed take
It is a legally untested take
No, this is an uninformed take.
I'd also worry that a language model is much more likely to introduce subtle logical errors, potentially ones which violate the hypervisor's security boundaries - and a user relying heavily on that model to write code for them will be much less prepared to detect those errors.
Generally speaking AI will make it easier to write more secure code. Tooling and automation help a lot with security and AI makes it easier to write good tooling.
I would wager good money that in a few years the most security-focused companies will be relying heavily on AI somewhere in their software supply chain.
So I don't think this policy is about security posture. No doubt human experts are reviewing the security-relevant patches anyway.
> Generally speaking AI will make it easier to write more secure code
In my personal experience, not at all.
While LLMs are really good at generating content, one of their key weaknesses is their (relative) inability to detect _missing_ content.
I'd argue that the most impactful software security bugs in the last couple of decades (Heartbleed etc) have been errors of omission, rather than errors of inclusion.
This means LLMs are:
1) producing lots more code to be audited
2) poor at auditing that code for the most impactful class of bugs
That feels like a dangerous combination.
I'd doubt this very much: LLMs hallucinate API calls and commit all sorts of subtle errors that you need to catch (esp. if you're working on proprietary problems they're not trained on).
It's a good replacement for Google, but probably nothing close to what it's being hyped out to be by the capital allocators.
I wonder whether the motivation is really legal? I get the sense that some projects are just sick of reviewing crap AI submissions
Possibly, but QEMU is such a critical piece of software in our industry. Its applications stretch from one end to the other: desktop VM, cloud/remote instance, build server, security sandbox, cross-platform environment, etc. Even a small legal risk can hurt the industry pretty badly.
The policy is concise and well bounded. It seems to me to assert that you cannot safely assign attribution of authorship of software code that you think was generated algorithmically.
I use the term algorithmic because I think it is stronger than "AI lol". I note they use terms like "AI code generator" in the policy, which might be just as strong but looks to me unlikely to become a useful legal term (it's hardly "a man on the Clapham omnibus").
They finish with this, rather reasonable flourish:
"The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax."
No doubt they do get a load of slop, but they seem to want to close the legal angles down first, and attribution seems a fair place to start. This playbook looks way better than curl's.
Have you seen how Monsanto enforces their seed rights?
This could honestly break open source, with how quickly you can generate bullshit, and how long it takes to review and reject it. I can imagine more projects going the way of Android where you can download the source, but realistically you can't contribute as a random outsider.
I have an online acquaintance that maintains a very small and not widely used open-source project and the amount of (what we assume to be) automated AI submissions* they have to wade through is kinda wild given the very small number of contributors and users the thing has. It's gotta be clogging up these big projects like a DDoS attack.
*"Automated" as in bots and "AI submissions" as in ai-generated code
I've always thought that the possibility of forking the project is the main benefit to open-source licensing, and we know Android can be forked.
the primary benefit of open source is freedom
This is so tautological that I can't really tell what point you're trying to make.
how can it possibly be tautological? The comment just above me said something entirely different: that the primary benefit of open source is forking
For many projects you realistically can't contribute as a random outsider anyway, simply because of the effort involved in grokking enough of the existing architecture to figure out where to make changes.
I think it is yet another reason (potentially malicious contributors are another) that open source projects are going to have to verify contributors.
Quality contributions to OSS are rare unless the project is huge.
Historically the opposite of quality contributions has been no contributions, not net-negative contributions (random slop that costs more in review than it provides benefit).
No it hasn't? Net-negative contributions to open source have been extremely common for years, it's not like you need an LLM to make them.
I guess we've had very different experiences!
i mean they say the policy is open for revision and it's also possible to make exceptions; if it's an excuse, they are going out of their way to let people down easy
I'm not sure which way AI would move the dial when it comes to the median submission. Humans can, and do, make some crap code.
If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.
Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.
I can see some people choosing to avoid AI due to the possibility of legal issues. I'm doubtful of the likelihood of such problems, but some people favour eliminating all possibility over minimizing likelihood. The philosopher in me feels like people who think they have eliminated the possibility of something just haven't thought about it enough.
Barrier to entry and automated submissions are two aspects I see changing with AI. Previously, you at least had to be able to code before submitting bad code.
With AI you're going to get job hunters automating PRs for big name projects so they can stick the contributions in their resume.
> If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.
> Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.
This ignores the fact that many open source projects do not have the resources to dedicate to a large number of contributions. A side effect of LLM generated code is probably going to be a lot of code. I think this is going to be an issue that is not dependent on the overall quality of the code.
I thought that this could be an opportunity for volunteers who can't dedicate the time to learn a codebase thoroughly enough to be a regular committer. They just have to evaluate a patch to see if it meets a threshold of quality where they can pass it on to someone who does know the codebase well.
The barrier to being able to do a first commit on any project is usually quite high; there are plenty of people who would like to contribute to projects but cannot dedicate the time and effort to pass that initial threshold. This might allow people to contribute at a lower level while gently introducing them to the codebase, where perhaps they might become regular contributors in the future.
It is interesting to read the pro-AI rant in the comments on the linked commit. The person who is threatening to use "AI" anyway has almost no contributions either in qemu or on GitHub in general.
This is the target group for code generators. All talk but no projects.
I'd hope there could be some distinction between using LLM as a super autocomplete in your IDE, vs giving it high-level guidelines and making it generate substantive code. It's a gray area, sure, but if I made a contribution I'd want to be able to use the labor-saving feature of Copilot, say, without danger of it copying an algorithm from open source code. For example, today I generated a series of case statements and Copilot detected the pattern and saved me tons of typing.
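To give a toy illustration of the kind of pattern completion I mean (a made-up sketch, not the code I actually wrote, and in Python rather than the language I was using): after the first couple of branches are typed by hand, the completion offers the rest of the table.

    # Hypothetical repetitive dispatch code: after the first two or three
    # branches are written by hand, an LLM autocomplete usually infers the
    # rest of the mapping and offers to fill it in.
    def mnemonic_for(opcode: int) -> str:
        match opcode:
            case 0x00:
                return "NOP"
            case 0x01:
                return "LOAD"
            case 0x02:
                return "STORE"
            case 0x03:
                return "ADD"
            case _:
                return "UNKNOWN"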
That and also just AI glasses that become an extension of my mind and body, just giving me clues and guidance on everything I do including what's on my screen.
I see those glasses as becoming just a part of me, just like my current dumb glasses are a part of me that enables me to see better, the smart glasses will help me to see AND think better.
My brain was trained on a lot of proprietary code as well; the copyright issues around AI models are pointless western NIMBY thinking and will lead to the downfall of western civilization if they keep pursuing legal what-ifs as an excuse to reject awesome technology.
This seems absolutely impossible to enforce. All my editors give me AI assisted code hints. Zed, cursor, VS code. All of them now show me autocomplete that comes from an LLM. There's absolutely no distinction between that code, and code that I've typed out myself.
It's like complaining that I may have no legal right to submit my stick figure because I potentially copied it from the drawing of another stick figure.
I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway. There's no way the people that write these things aren't aware they're completely unenforceable.
> I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway.
Of course it is. And nobody said otherwise, because that is explicitly stated on the commit message:
And in the patch itself: What other commenters pointed out is that, beyond the legal issue, other problems also arise from the use of AI-generated code. It's like the seemingly confusing gates at customs that say "nothing to declare" when you've already made your declarations. Walking through that gate is a conscious act that places culpability on you, so you can't simply say "oh, I forgot" or something.
The thinking here is probably similar: if AI-generated code becomes poisonous and is detected in a project, the DCO could allow shedding liability onto the contributor that said it wasn’t AI-generated.
> Of course it is. And nobody said otherwise, because that is explicitly stated on the commit message
Don’t be ridiculous. The majority of people are in fact honest, and won’t submit such code; the major effect of the policy is to prevent those contributions.
Then you get plausible deniability for code submitted by villains, sure, but I’d like to hope that’s rare.
I think most people don't make money by submitting code to QEMU, so there isn't that much incentive to cheat.
Neovim doesn't force you to use AI, unless you configure it yourself. If your editor doesn't allow you to switch it off, there must be a big problem with it.
I understand where this comes from but I think it's a mistake. I agree it would be nice if there were "well settled law" regarding AI and copyright, but right now there are relatively few rulings and next to zero legislation on which to base a position.
In addition to a policy to reject contributions from AI, I think it may make sense to point out places where AI generated content can be used. For example - how much of QEMU project's (copious) CI setup is really stuff that is critical content to protect? What about ever-more interesting test cases or environments that could be enabled? Something like "contribute those things here instead, and make judicious use of AI there, with these kinds of guard rails..."
What's the risk of not doing this? Better code but slower velocity for an open source project?
I think that particular brand of risk makes sense for this particular project, and the authors don't seem particularly negative toward GenAI as a concept, just going through a "one way door" with it.
>Better code but slower velocity for an open source project
Better code and "AI assist coding" are not exclusive of each other.
A simpler solution is just to wait until the legal situation is clearer.
QEMU is (mostly) GPL 2.0 licensed, meaning (most) code contributions need to be GPL 2.0 compatible [0]. Let's say, hypothetically, there's a code contribution added by some patch involving gen AI code which is derived/memorised/copied from non-GPL compatible code [1]. Then, hypothetically, a legal case sets precedent that gen AI FOSS code must re-apply the license of the original derived/memorised/copied code. QEMU maintainers would probably need to roll back all those incompatible code contributions. After some time, those code contributions could have ended up with downstream callers which also need to be rewritten (even in CI code).
It might be possible to first say "only CI code which is clearly labelled as 'DO NOT RE-USE: AI' or some such". But the maintainers would still need to go through and rewrite those parts of the CI code if this hypothetical plays out. Plus it adds extra work to reviews and merge processes etc.
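To give a feel for the bookkeeping that implies, here is a minimal sketch (the "AI-Generated: yes" commit trailer is a hypothetical convention, not anything QEMU actually uses) of how a maintainer might enumerate the commits that would need auditing or reverting:

    # Minimal sketch: list commits carrying a hypothetical "AI-Generated: yes"
    # trailer so they could be audited or reverted if the legal picture changes.
    import subprocess

    def ai_tagged_commits(repo_path: str) -> list[str]:
        # --grep matches the commit message; --format=%H prints full hashes.
        result = subprocess.run(
            ["git", "-C", repo_path, "log",
             "--grep=AI-Generated: yes", "--format=%H"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.split()

    if __name__ == "__main__":
        for sha in ai_tagged_commits("."):
            print(sha)

And even with labelling like that, every downstream caller of the reverted code would still have to be found and rewritten by hand, which is the real cost.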
it's just less work and less drama for everyone involved to say "no thank you (for now)".
----
caveat: IANAL, and licensing is not my specific expertise (but i would quite like it to be one day)
[0]: https://github.com/qemu/qemu/blob/master/LICENSE
[1]: e.g. No license / MPL / Apache / Artistic / Creative Commons https://www.gnu.org/licenses/license-list.html#NonFreeSoftwa...
This isn't like some other legal questions that go decades before being answered in court. There are dozens of cases working through the courts today that will shed light on some aspects of the copyright questions within a few years. QEMU has made great progress over the last 22 years without the aid of AI, waiting a few more years isn't going to hurt them.
I think you need to read between the lines here. Anything you do is a legal risk, but this particular risk seems acceptable to many of the world's largest and richest companies. QEMU isn't special, so if they're taking this position, it's most likely simply because they don't want to deal with LLM-generated code for some other reason, and are eager to use legal risk as cover to avoid endless arguments on mailing lists.
We do that in corporate environments too. "I don't like this" -> "let me see what lawyers say" -> "a-ha, you can't do it because legal says it's a risk".
There is a well settled practice in computing that you just don't plagiarize code. Even a small snippet. Even if copyright law would consider such a small thing "fair use".
> There is a well settled practice in computing that you just don't plagiarize code. Even a small snippet.
I think the way many developers use StackOverflow suggests otherwise.
In the first place, in order to post to StackOverflow, you are required to have the copyright over the code, and be able to grant them a perpetual license.
They redistribute the material under the CC BY-SA 4.0 license. https://creativecommons.org/licenses/by-sa/4.0/
This allows visitors to use the material, with attribution. One can, of course, use the ideas in a SO answer to develop one's own solution.
> you are required to have the copyright over the code, and be able to grant them a perpetual license.
Which Stack Overflow cannot verify. It might be pulled from a code base, or generated by AI (I would bet a lot is now).
Show me the professional code base with the attribution to stack overflow and I'll eat my hat.
Obviously I cannot show the code base, but when I pick a pre-existing solution from Stack Overflow or elsewhere (though it is quite rare), I do add a comment linking to the source: after all, in the case of SO the discussion there might be interesting for the future maintainers of the function.
I just checked, though, and the code base I'm now working with has eight stackoverflow links. Not all are even written by me, according to a quick check with git blame and git log -S.
I always do too, for exactly the same reason.
This isn't 100% true, meaning it isn't well settled. Have people already forgotten Google vs Oracle? Google ended up winning that after years and years, but the judgements went back and forth, and there are around 4 or 5 guidelines to determine whether something is or isn't fair use, and generative AI would fail at a few of those.
Google vs. Oracle was about whether APIs are copyrightable, which is an important issue that speaks to antitrust. Oracle wanted the interface itself to be copyrighted so that even if someone reproduced the API from a description of it, it would infringe. The implication being that components which clone an API would be infringing, even though their implementation is original, discouraging competitors from making API-compatible components.
My comment didn't say anything about the output of AI being fair use or not, rather that fair use (no matter where you are getting material from) ipso facto doesn't mean that copy paste is considered okay.
Every employer I ever had discouraged copy and paste from anywhere as a blanket rule.
At least, that had been the norm, before the LLM takeover. Obviously, organizations that use AI now for writing code are plagiarizing left and right.
> Google vs. Oracle was about whether APIs are copyrightable, which is an important issue that speaks to antitrust.
In addition to the Structure, Sequence and Organization claims, the original filing included a claim for copyright violation on 9 identical lines of code in rangeCheck(). This claim was dropped after the judge asked Oracle to reduce the number of claims, which forced Oracle to pare down to their strongest claims.
I've been trying out Claude Code (the tool I've found most effective in terms of agentic code gen/manipulation) for an emulator project of mine for the last few days. Part of it is a compiler from an architecture definition to disassembler/interpreter/recompiler. I hit a fairly minor compiler bug and decided to ask Claude to debug and fix it. Some things I noted:
1. My C# code compiled just fine and ran even, but it was convinced that I was missing a closing brace on a lambda near where the exception was occurring. The diff was ... Putting the existing brace on a new line. Confidently stated that was the problem and declared it fixed.
2. It did figure out that an unexpected type was being seen, and implemented a pathway that allowed for it to get to the next error, but didn't look into why that type had gotten there; that was the actual bug, not the unhandled type. So it "fixed" it, but just kicked the can down the road.
3. When figuring out the issue, it just looked at the stack trace. That was it. It was running the compiler itself; it could've just embedded some debug code (like I did) and worked out what the actual issue was, but it didn't even try. The exception was just a NotSupportedException with no extra details to work off of, so adding just a crumb of context would let you solve the issue.
Now, is this the simplest emulator you could throw AI at? No, not at all. But neither is qemu. I'm thoroughly unconvinced that current tools could provide real value on codebases like these. I'm bullish on them for the future, and I use GenAI constantly, but this ain't a viable use case today.
BigTech now control Qemu?
"Signed-off-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>"
> It's best to start strict and safe, then relax.
Makes total sense.
I am just wondering how we differentiate between AI-generated code and human-written code that is influenced by or copied from some unknown source. The same licensing problem may happen with human code as well, especially for OSS where anyone can contribute.
Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.
> Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.
It’s a power saw. A really powerful tool that can be dangerous if used improperly. In that sense the code generator can have more or less of a mind of its own depending on the wielder.
Ok I think I’ve stretched the analogy to the breaking point…
there's no audit trail for how most code gets shaped anyway: a teammate's intuition from a past outage, a one-liner from some old jira ticket, even the shape of a func pulled from habit. none of that is reviewable but still it gets trusted lol
ai moves faster than group consensus. this ban won't slow down the tech, it may just make projects like qemu harder to enter, harder to scale, harder to test thru properly
so if we maintain code like this we gotta know the trade we're making: we're preserving trust but limiting throughput. maybe fine, idk, but don't confuse it with future proofing
i kinda feel it does expose that trust in oss is social, not epistemic. we accept complex things if we know who dropped them and we reject clean things if they smell synthetic
so the real qn isn't "did we use ai?", it's "can we even maintain this in 6mo?" and if the answer's yes, it doesn't really matter who produced the code fr
Signed by mostly people at RedHat, which is owned by IBM, which makes Watson, which beat humans in Jeopardy in 2011.
> These are early days of AI-assisted software development.
Are they? Or is this just IBM destroying another acquisition slowly.
Meanwhile the Dotnet Runtime is fully embracing AI. Which people on the outside may laugh at but you have extremely talented engineers like Stephen Toub and David Fowler advocating for it.
So enterprises: next time you have an IBM rep trying to sell you AI services, do yourself a favor and go to any other number of companies out there who are actually serious about helping you build for the future.
And since I am a North Carolina native, here’s to hoping IBM and RedHat get their stuff together.
>> The tools will mature, and we can expect some to become safely usable in free software projects.
It should be possible to build a useful AI code generator for a given programming language solely from the source code for the language itself. Doing so, however, would require some maturity.
Using AI code generators, I have been able to get the code base large enough that the model was starting to make nonsense changes.
However, my overall experience has me thinking about how this is going to be a massive boon to open source. So many patches, so many new tools will be created to streamline getting new packages into repos. Everything can be tested.
Open source is going to be epically boosted now.
QEMU deciding to sit out this acceleration is crazy to me, but it's probably what is going to give Xen/Docker/Podman the lead.
qq
Would it make sense to include the complete prompt that generated the code with the code?
Including prompts would create transparency but still wouldn't resolve the underlying copyright uncertainty of the output or guarantee the code wasn't trained on incompatibly-licensed material.
You’d need to hash the model weights and save the seeds for the temperature prng as well, in order to verify the provenance. Ideally it would be reproducible, right?
Maybe 2 years ago. Nowadays LLMs call functions and use tools, good luck capturing that in a way that it's reproducible.
It would need to be more than that. A prompt that works for one model can give different results with another. Even when the same model gets different treatment at inference time, e.g. quantization, the same prompt could produce different output from the unquantized and quantized versions.
Even more so: when you come back to the code in a few years and try to understand it, the model will no longer be available.
One of several reasons to use an open model even if it isn't quite as good. Version control the models and commit the prompts with the model name and a hash of the parameters. I'm not really sure what value that reproducibility adds though.
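As a rough sketch of what such a record might look like (the field names here are made up, and this obviously doesn't capture tool calls or the other sources of non-determinism mentioned above):

    # Sketch of a provenance record that could be committed alongside generated
    # code: model name, a hash of the weights, the prompt, and the sampling seed.
    # All field names are hypothetical, not any established standard.
    import hashlib
    import json

    def provenance_record(model_name: str, weights_path: str,
                          prompt: str, seed: int) -> str:
        digest = hashlib.sha256()
        with open(weights_path, "rb") as f:
            # Hash the weights file in chunks to avoid loading it all at once.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return json.dumps({
            "model": model_name,
            "weights_sha256": digest.hexdigest(),
            "prompt": prompt,
            "sampling_seed": seed,
        }, indent=2)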
I use LLMs for generating documentation- I write my code, and ask Claude to write my documentation
I think you are doing it the wrong way around.
Maybe not. I trust Claude to write docs. I don’t trust it to write my code the way I want.
Coolest thing I've seen today.
I'm interested to see how this plays out. I'd like a similar policy for my projects, but also a similar policy/T&C that prohibits the crawling of the content too.
Only way to prohibit crawling is to go back to invite only, probably self-hosted repositories. These companies have no shame, your T&Cs won't mean anything to them and you have no way of proving they violated them without some kind of discovery into their training data.
That’s very conservative.
Oi
This is a "BlockBuster laughs Netflix out of the room" moment. I am a huge fan of QEMU and used it throughout my career. The maintainers have every right to govern their project as they see fit. But this is a lot of mental gymnastics to justify clinging to punchcards in a world where we now have magnetic tape and keyboards to do things faster. This tech didn't spawn weeks ago. Every major project has had at least two years to prepare for this moment.
Pull your pants up.
2 years isn’t that long. It took the Linux kernel 10 years to start accepting code written in Rust. This isn’t quite the same as the typical frontend flavor-of-the week JavaScript library.
> This is a "BlockBuster laughs Netflix out of the room" moment
I'm not sure that's the dunk you think it is. Good for Netflix for making money, but we're drowning in their empty slop content now and worse off for it.
Who is forcing you to watch slop? And mind you, there was a TON of garbage at any local Blockbuster back in the day, with the added joy of having to go somewhere to rent it, being slapped with late and rewind fees or not even have availability of what you want to watch.
Choice is good. It means more slop, but also more gold. Figure out how to find the gold.
You're so dramatic. Like they said in the declaration, these are the early days of AI development and all the problems they mention will be eventually resolved so they have no problem taking a backseat while things sort themselves out and I respect that choice.
When will people give up this archaic practice of sending patches over emails?
When enough people don't want to do it anymore. Feel free to step up, live with email patches, and add to the numbers of those who don't like it and say so.
Why is it archaic if it works? I get there might be other ways to do patch sharing and discussion but what exactly is your problem with email as a transport?
You might as well describe voice and ears as archaic!
Sending patches over email is basically a filter for slop. Stops the low effort drive by PRs and anyone who actually wants to invest some time in to contributing won't have a problem working out the workflow.
AI can figure out how to send a patch via email a lot faster than a human.
likely when it stops being a useful way to cut out noise
So essentially it’s “let us cover ourselves by saying it’s not allowed”, and in practice that means not allowing code that a human reviewer thinks is AI generated.
Universities have this issue too, despite many offering students and staff Grammarly (Gen AI) while also trying to ban Gen AI.
Sounds like a good idea to ensure developers are owning the code they submit rather than hiding behind "I don't know why it does that, ChatGPT wrote it".
Use AI if you want to, but if the person on the other side can tell, and you can't defend the submission as your own, that's a problem.
> Use AI if you want to, but if the person on the other side can tell, and you can't defend the submission as your own, that's a problem.
The actual policy is "don't use AI code generators"; don't try to weasel that into "use it if you want to, but if the person on the other side can tell". That's effectively "it's only cheating if you get caught".
By way of analogy, Open Source projects also typically have policies (whether written or unwritten) that you only submit code you are legally allowed to submit. In theory, you could take a pile of proprietary reverse-engineered code that you have no license to, or a pile of code from another project that you aren't respecting the license of, and submit it anyway, and slap a `Signed-off-by` on it. Nothing will physically stop you, and people might not be able to tell. That doesn't make it OK.
The way I interpret it is that if you brainstorm using ChatGPT but write your own code using the ideas created in this step that would be fine, the reviewer wouldn't suspect the code of being AI generated because you've made sure it fits in with the project and actually works. The exact wording here is that they will reject changes they suspect of being AI generated, not that you can't have read anything AI generated in the process.
Getting AI to remind you of the libraries API is a fair bit different to having it generate 1000 lines of code you have hardly read before submitting.
What if the code is AI generated and the developer that drove it also understands the code and can explain it?
Well, then you’re not allowed to submit it. This isn’t hard.
Well, I guess the key difference is that code is deterministic; whether a paper accomplishes its goals is somewhat subjective, but with code it's an absolute certainty.
I'm sure that if a contributor working on a feature used cursor to initially generate the code but then goes over it to ensure it's working as expected that would be allowed, this is more for those folks that just want to jam in a quick vibe-coded PR so they can add "contributed to the QEMU project" on their resumes.
You'd be wrong, the linked commit clearly says that anything written by, or derived from, AI code generation is not allowed.
It's more like a clarification.
The rules regarding the origin of code contributions are rather strict: you can't contribute other people's code unless you can make sure that the licence is appropriate. An LLM may output a copy of someone else's code, sometimes verbatim, without giving you its origin, so you can't contribute code written by an LLM.
AI generated code is generally pretty good and incredibly fast.
Seeing this new phenomenon must be difficult for those people who have spent a long time perfecting their craft. Essentially, they might feel that their skillsets are being undermined. It would be especially hard for people who associate a lot of their self-identity with their job.
Being a purist is noble, but I think that this stance is foolish. Essentially, people who chose not to use AI code tools will be overtaken by the people who do. That's the unfortunate reality.
It's not a stance about the merits of AI generated code but about the legal status of it, in terms of who owns it and related concepts.
Yes the reasoning behind the decision is clear and as you described. But I would also make the point that the decision also comes with certain consequences, to which a discussion about merits is directly relevant.
> Essentially, people who chose not to use AI code tools will be overtaken by the people who do. That's the unfortunate reality.
Who is going to "overtake" QEMU, what exactly does that mean, and what will it matter if they are?
OP said people. QEMU is not people.
We're talking about a decision that the people behind QEMU made that affects people, the consequences of which make the discussion of merits "directly relevant".
If we're talking about something that involves neither QEMU nor the people behind it, where is the relevance? It's just a rant on AI at that point.
I wish people would make distinction regarding the size/scope of the AI-generated parts. Like with video copyright laws, where a 5-second clip from a copyrighted movie is usually considered fair use and not frowned upon.
Because for projects like QEMU, current AI models can actually do mind-boggling stuff. You can give it a PDF describing an instruction set, and it will generate you wrapper classes for emulating particular instructions. Then you can give it one class like this and a few paragraphs from the datasheet, and it will spit out unit tests checking that your class works as the CPU vendor describes.
Like, you can get from 0% to 100% test coverage several orders of magnitude faster than doing it by hand. Or refactoring, where you want to add support for a particular memory virtualization trick, and you need to update 100 instruction classes based on straight-forward, but not 100% formal rule. A human developer would be pulling their hairs out, while an LLM will do it faster than you can get a coffee.
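To make that concrete, here's a toy sketch of the shape of thing I mean (invented instruction and semantics, not output from any real datasheet or model): the per-instruction wrappers are mostly mechanical boilerplate, and the matching unit tests just restate the vendor's description.

    # Toy example: a per-instruction wrapper class plus a unit test restating
    # the (invented) datasheet behaviour "ADDI rd, rs, imm: rd = rs + imm,
    # wrapping at 32 bits". This is the kind of boilerplate an LLM churns out
    # quickly once it has seen one or two hand-written examples.
    import unittest

    class AddImmediate:
        def __init__(self, rd: int, rs: int, imm: int):
            self.rd, self.rs, self.imm = rd, rs, imm

        def execute(self, regs: list[int]) -> None:
            regs[self.rd] = (regs[self.rs] + self.imm) & 0xFFFFFFFF

    class TestAddImmediate(unittest.TestCase):
        def test_wraps_at_32_bits(self):
            regs = [0] * 32
            regs[1] = 0xFFFFFFFF
            AddImmediate(rd=2, rs=1, imm=1).execute(regs)
            self.assertEqual(regs[2], 0)

    if __name__ == "__main__":
        unittest.main()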
Not all jurisdictions are the US, and not all jurisdictions allow fair use; some instead have specific fair dealing laws. Other jurisdictions have neither, meaning that every use has to be cleared.
There are simple algorithms that everyone will implement the same way down to the variable names, but aside from those fairly rare exceptions, there's no "maximum number of lines" metric to describe how much code is "fair use" regardless of the licence of the code "fair use"d in your scenario.
Depending on the context, even in the US that 5-second clip would not pass fair use doctrine muster. If I made a new film cut entirely from five second clips of different movies and tried a fair use doctrine defence, I would likely never see the outside of a courtroom for the rest of my life. If I tried to do so with licensing, I would probably pay more than it cost to make all those movies.
Look up the decisions over the last two decades over sampling (there are albums from the late 80s and 90s — when sampling was relatively new — which will never see another pressing or release because of these decisions). The musicians and producers who chose the samples thought they would be covered by fair use.
It sounds like you're saying someone could rewrite Qemu on their own, with the help of AI. That would be pretty funny.
Given enough time, a monkey randomly typing on a typewriter can rewrite QEMU.
Qemu can make the choice to stay in the "stone age" if they want. Contributors who prefer AI assistance can spend their time elsewhere.
It might actually be prudent for some (perhaps many foundational) OSS projects to reject AI until the full legal case law precedent has been established. If they begin taking contributions and we find out later that courts find this is in violation of some third party's copyright (as shocking as that outcome may seem), that puts these projects in jeopardy. And they certainly do not have the funding or bandwidth to avoid litigation. Or to handle a complete rollback to pre-AI background states.