djoldman 9 hours ago

> There’s a strange disconnect in the industry. On one hand, GitHub claims that 20 million users are on Copilot, and Sundar Pichai says that over 25% of the code at Google is now written by AI. On the other hand, independent studies show that AI actually makes experienced developers slower.

From the study[0]:

> 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience.

This study continues to get a lot of airtime on HN and elsewhere. Folks should probably be skeptical of a study that combines a small number of subjects with a broad claim.

[0] https://arxiv.org/pdf/2507.09089

  • Fraterkes 8 hours ago

    Shouldn’t users be equally skeptical of claims by AI companies? I’d argue things even out in that case.

  • jwitthuhn 9 hours ago

    Anyone pointing to that as proof that AI slows developers down has not actually read it. See appendix B

    "We do not provide evidence that: AI systems do not currently speed up many or most software developers"

  • ninetyninenine 9 hours ago

    Why is stuff like this making it to the front page?

    He looks like a typical software engineer with a very, very generic opinion on AI, presenting nothing new.

    The arrogance the article starts off with, where he talks about how much time he’s invested in AI (1.5 years, holy cow) and how that makes him qualified to give his generic opinion, is just too much.

    • tempodox 9 hours ago

      Who would be qualified to give an opinion then?

      • ninetyninenine 8 hours ago

        Tons of more qualified people. Geoffrey Hinton, for one. That’s not even the main issue though; I don’t care who he is.

        The point is this opinion is generic. It’s nothing new. It’s like someone stating “cars use gas, I’ve been driving for 1.5 years and I learned enough to say that cars use gas.”

        • tempodox 6 hours ago
          • ninetyninenine 5 hours ago

            You're dropping that link as if what he says should be dismissed.

            His opinion is different and worth reading about. His expertise and knowledge are highly relevant. What he says is also extremely likely to be true.

            I think the sheer amount of hype around AI has turned it into a hallucinating, brain-damaged version of a human that everyone likes to scoff at. We're blinded to the significance of what has happened here, and blinded to the rate at which this thing is improving in terms of raw intelligence.

            The LLM is a milestone in human history on the order of landing on the moon. But excessive exposure to this stuff on social media has dampened its significance.

toddmorey 8 hours ago

> For everything I don’t like doing, AI is phenomenally good. Take design, for instance.

I've seen this sentiment quite a bit; I think it's really baked into the human psyche. We understate the importance of what we don't enjoy and perhaps overstate the importance of the tasks we do enjoy and excel at. It makes sense: we're defending ourselves and our investments in learning our particular craft.

smokel 10 hours ago

> The company that creates an AGI first will win and get the most status.

I doubt it. History has shown that credit for an invention often goes to the person or company with superior marketing skills, rather than to the original creator.

In a couple of centuries, people will sincerely believe that Bill Gates invented software, and Elon Musk invented self-driving cars.

Edit: and it's probably not even about marketing skill, but about being so full of oneself as to have biographies printed and make others believe how amazing one is.

  • jacobedawson 9 hours ago

    Without getting sidetracked by definitions, there's a strong case to be made that developing AGI is a winner-takes-all event. You would have access to any number of tireless human-level experts that you could put to work improving the AGI system, likely leading to ASI in a short amount of time, with even a day's lead growing exponentially.

    Where that leaves the rest of us is uncertain, but in many worlds the idea of status or marketing won't be relevant.

  • amelius 10 hours ago

    A few decades ago I thought that the first person to create AGI would instantly receive a Nobel Prize or Turing Award.

    But my opinion on this has shifted a lot. The underlying technology is pretty lame. And improvements are incremental. Yes, someone will be the first, but they will be closely followed by others.

    Anyway, I don't like the "impending doom" feeling that these companies create. I think we should tax them for it. If you throw together enough money, yeah, you can do crazy things. Doesn't mean you should be able to harass people with it.

    • XCSme 9 hours ago

      I doubt LLMs will lead to AGI.

      Yes, it gets "smarter" each time, more accurate, but it still lacks ANY creativity or actual thought/understanding. "You're completely right! Here, I fixed the code!" (proceeds to copy-paste the original code with the same bug).

      LLMs will mostly replace:

      - search (finding information / giving people expert-level advice in a few seconds)
      - data processing (retrieving information, listening and reacting to specific events, automatically transforming and forwarding information)
      - interfaces (not entirely, but mostly augmenting them; sort of a better auto-complete and intention detection)
      - most content ideation and creation (it will not replace art, but if someone needs an ad, a business card, a landing page, etc., the AI will do a good enough job)
      - finding errors in documents/code/security, etc.

      All those use-cases are already possible today, AI will just make them more efficient.

      It will be a LONG time until AI knows how to autonomously achieve the result you want and has the physical-world abilities to do so.

      For AGI, the "general" part will only be as broad as the training data. Also, right now the AI defers too much to human instruction and is crippled for (good) safety reasons. While all those limitations are in place, the "general intelligence" will remain limited, as it would be too dangerous to remove the limits and see where it goes (not because it's smart, but because it's like letting malware have access to the internet).

      • conception 6 hours ago

        Arguably, hallucinating is a path to creativity. There are studies being done on having one LLM hallucinate ideas and another validate them as possibly novel.

        LLMs alone probably won’t be AGI, but humanity’s intelligence is also not just the limbic system or the neocortex. Our brains are various differently styled tools interconnected to create a greater whole. To that end, LLMs may be a key part of bringing together the tools we have already built (computing, machine learning, robotics, etc.) into a larger system that is AGI.

        • XCSme 5 hours ago

          Oh, so LLMs would be just a part of the "AI". Good point.

    • smokel 9 hours ago

      > The underlying technology is pretty lame.

      This depends on the perspective. Take a step back and consider what the actual technology is that makes this possible: neural networks, the transistor, electricity, working together in groups? All pretty cool, IMHO.

  • begueradj 10 hours ago

    I agree. For example, electric cars were already around in the mid-1800s, but some people believe Elon Musk is the original inventor.

    • pineaux 9 hours ago

      Some people believe the earth is flat. I doubt that the invention of the electric car will be attributed to Musk. The invention of the car is not attributed to Ford either...

netown 9 hours ago

> the people who gain the most from all these tools are not the developers—it’s all the people around who don’t write code

This insight stood out the most to me. I definitely agree, but what's interesting is the disconnect with the industry: it seems to be accepted right now that if coding is what AI is best at, developers must be the only ones who care, and that seems to have shown up in usage as well (I don't think I've seen much use of AI outside of personal use other than by developers; maybe I'm wrong?)

  • conception 6 hours ago

    I’ve been giving Cursor to non-developers and watching their eyes light up at its “magic”. Giving non-developers easy access to simple Python/PowerShell/Bash/etc. scripting for business tasks is a huge area of low-hanging fruit that AI is extremely good at.
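
    As a hypothetical illustration (the folder and column names below are made up, not from any real deployment), this is the kind of one-off business script an LLM can generate on request:

        # Merge every CSV export in ./exports into one deduplicated report.
        import csv
        import glob

        rows, seen = [], set()
        for path in sorted(glob.glob("exports/*.csv")):
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    key = row.get("order_id")  # hypothetical key column
                    if key not in seen:
                        seen.add(key)
                        rows.append(row)

        if rows:
            with open("report.csv", "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=rows[0].keys())
                writer.writeheader()
                writer.writerows(rows)

    Nothing here is hard for a developer, but for someone who has never scripted anything, getting this in seconds genuinely feels like magic.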

lowsong 10 hours ago

> the future of software engineering will inevitably have more AI within it

Probably not. We're deep in the hype bubble, so AI is heavily overused. Once the bubble pops and things calm down, some use-cases may well emerge from the ashes, but it'll be nowhere near as ubiquitous as it is now.

> AI has become a race between countries and companies, mostly due to status. The company that creates an AGI first will win and get the most status.

There's a built-in assumption here that AGI is not only possible but inevitable. We have absolutely no evidence that that's the case, and the only people saying we're even remotely close are tech CEOs whose entire business model depends on people believing that AGI is around the corner.

  • rco8786 10 hours ago

    > We're deep in the hype bubble, so AI is strongly overused

    I don't think these things are really that correlated. In fact, kind of the opposite. Hype is all talk, not actual usage.

    I think this will turn out more like the internet itself. Wildly overhyped and underused when the dotcom bubble burst. But over the coming years and decades it grew steadily and healthily until it was everywhere.

    Agreed re: AGI though.

    • codingdave 10 hours ago

      That is not how the dotcom bubble burst. Internet usage was growing fast before, during, and after the bubble. The bubble was about silly investments into it that had no business model - that investment insanity is what burst, not overall usage.

      • rco8786 10 hours ago

        I think I am saying the same thing. My comment about "underused" should have been "underused relative to the investment dollars pouring in"

      • satyrun 9 hours ago

        Many of the business models were good too but they had the timing wrong.

        Pets.com IPO'd at about $300 million ($573 million adjusted for inflation).

        Chewy is at a 14 billion market cap right now.

        I think comparing LLMs to the dotcom bubble is incredibly lazy and useless thinking. If anything, all that previous bubbles show is what is not going to happen again.

        • rco8786 9 hours ago

          > I think comparing LLMs to the dotcom bubble is incredibly lazy and useless thinking. If anything, all that previous bubbles show is what is not going to happen again.

          Curious to hear more here. What is lazy about it? My general hypothesis is that ~95% of AI companies are overvalued by an order of magnitude or more and will end up with huge investor losses. But longer term (10+ years) many will end up being correctly valued at an order of magnitude above today's levels. This aligns perfectly with your pets.com/Chewy example.

    • tovej 10 hours ago

      Certain parts of what we call AI will definitely be used more in the future: facial recognition, surrogate models, video generation.

      I don't, however, see LLMs as consumer products being as prevalent in the future as they are currently. The cost of using LLMs is kept artificially low for consumers at the moment. That is bound to hit a wall eventually, at the very least when the bubble pops. At least that seems like the obvious analysis to make at this point in time.

      • 15155 6 hours ago

        > The cost of using LLMs is kept artificially low for consumers at the moment.

        If the results of current LLM performance are acceptable, the cost of achieving those same results will inevitably go down as semiconductor process improvements bring markedly reduced operational expenses (power, density, etc.).

      • rco8786 10 hours ago

        The cost angle is interesting, I'm not close enough to the industry to say for sure. But from what I'm reading inference and tokens are only getting cheaper.

        Regarding usage - I don't think LLMs are going away. I think LLMs are going to be what finally topples Google search. Even my less technical friends and acquaintances are frequently reaching for ChatGPT for things they would have Googled in the past.

        I also think we'll see the equivalent of Google AdWords start to pop up in free consumer LLMs.

dude250711 10 hours ago

It would be better if the entire codebase could always be provided to the AI as context. Otherwise, specifying exactly what you want is one step away from just doing it yourself.

> As a manager, AI is really nice to get a summary of how everything is going at the company and what tasks everyone is working on and the status of the tasks, instead of having refinement meetings to get status updates on tasks.

I do not understand why they are not marketing some "GPT Middle Manager" to executive boards so that they can cut that fat. Surely there is huge untapped cost-cutting potential there?

  • anonzzzies 10 hours ago

    It cannot be worse than almost all human managers, so agreed. I am a terrible manager myself: I'm a good CEO/CTO, making profits and keeping things running for almost no money, but I'm terrible at managing. And I haven't seen many managers who couldn't be replaced by a piece of cardboard. There are exceptions, but AIs can do terrible team management just as well, while keeping their upper managers/C-levels busy with nonsense documents, as is the standard for humans too. Indeed, yesterday I wrote on HN that this is what LLMs are VERY good for: generating ENORMOUS piles of paper to give to all types of (middle) management to make them feel valued.

  • justanotherjoe 10 hours ago

    The obvious next step is being able to easily put new knowledge into the parameters of the model itself.

    I want the AI to know my codebase the same way it knows the earth is round. Without any context fed to it each time.

    Instead we have this weird Memento-esque setup where you have to give it context each time.
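
    As a rough sketch of that Memento-esque pattern (call_llm below is a hypothetical stand-in for whatever chat-completion API you use), note that the entire codebase has to be re-read and re-sent on every single question, because the model retains nothing between calls:

        from pathlib import Path

        def codebase_context(root: str) -> str:
            # Re-collect every source file on every call; the model
            # remembers nothing from previous requests.
            parts = []
            for path in sorted(Path(root).rglob("*.py")):
                parts.append(f"# file: {path}\n{path.read_text()}")
            return "\n\n".join(parts)

        def ask(question: str) -> str:
            prompt = codebase_context("src/") + "\n\n" + question
            return call_llm(prompt)  # hypothetical LLM call

        # The same context is paid for (in tokens) again and again:
        ask("Where is the retry logic implemented?")
        ask("Add a timeout to the HTTP client.")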

  • nikolayasdf123 10 hours ago

    They are already doing that, full steam ahead. Just look at HSBC, which has made it a goal to lay off middle managers. So have Google, Microsoft, and others.

    • watwut 10 hours ago

      We already had that cycle with agile. I predict half-baked models, chaos, then a backlash of hiring even more managers, combining models and management into one large ineffective bundle.

      The ones profiting the most will be consultancies designed to protect the upper management reputation.

demirbey05 10 hours ago

> If an AI can replace these repeated tasks, I could spend more time with my fiancé, family, friends, and dog, which is awesome, and I am looking forward to that.

I cannot understand this optimism. Aren't we living in a capitalist world?

  • shafyy 9 hours ago

    Exactly. I have yet to see the manager who says to their employees: "Ah nice, you became 10% more efficient using AI, so from now on you can work four fewer hours every week".

  • _heimdall 10 hours ago

    I don't think it's about capitalism; people have repeatedly shown that we simply don't like idle time over the long run.

    Plenty of people could already work less today if they just spent less. Historically any of the last big productivity booms could have similarly let people work less, but here we are.

    If AGI actually comes about and replaces humans at most cognitive labor, we'll find some way to keep ourselves busy, even if the jobs are ultimately as useless as the pet rock or the Jump to Conclusions Mat (an Office Space reference, for anyone who hasn't seen it).

    • chongli 10 hours ago

      I don’t think it’s that simple. Productivity gains are rarely universal. Much of the past century’s worth of advancement into automation and computing technology has generated enormous productivity gains in manufacturing, communication, and finance industries but had little or no benefit for a lot of human capital-intensive sectors such as service and education.

      It still takes basically the same amount of labour hours to give a haircut today as it did in the late 19th century. An elementary school teacher today still cannot handle more than a few tens of students, maybe a hundred at the extreme limit. Yet the hairdressing and education industries must still compete, on the labour market, with the industries showing the largest productivity gains. This has the effect of raising wages in these productivity-stagnant industries and increasing the cost of their services for everyone, driving inflation.

      Inflation is the real time-killer, not a fear of idleness. The cost of living has gone up for everyone — rather dramatically, in nominal terms — without even taking housing costs into account.

      • _heimdall 3 hours ago

        Productivity gains aren't universal, agreed there for sure, though we have long since moved past needing to optimize productivity for the basics. Collectively we're addicted to trading our time and effort for gadgets, convenience, and status symbols.

        I'm not saying those are bad things, people can do whatever they want with their own time and effort. It just seems obvious to me that we aren't interested in working less over any meaningful period of time, if that was a goal we could have reached it a long time ago by defining a lower bar for when we have "enough."

    • AlecSchueler 9 hours ago

      > idle time

      But they're not talking about idle time, they're talking about quality time with loved ones.

      > Plenty of people could already work less today if they just spent less.

      But spending for leisure is often a part of that quality time. The idea is being able to work less AND maintain the same lifestyle.

      • _heimdall 4 hours ago

        > But they're not talking about idle time, they're talking about quality time with loved ones.

        I totally agree there. I wasn't trying to imply that "idle time" is a bad thing; in this context I simply meant it's time not filled by obligations, allowing them to choose what they do.

        > But spending for leisure is often a part of that quality time.

        I expect that varies a lot by person and situation. Some of the most enjoyable experiences I've had involved little or no cost: having a campfire with friends, going on a hike, working outside in the garden, etc.

        • AlecSchueler 4 hours ago

          > I wasn't trying to imply that "idle time" is a bad thing

          I hear you; I just mean that what they're talking about is also not idle time, it's active time. If they were replacing work with sitting around at home watching TV or whatever, it would be idle time and would no doubt drive them crazy. But spending time actively with their family is quite different, and would give satisfaction in a way that work does.

          > I expect that varies a lot by person and situation.

          Indeed. Spending isn't an inherent part of leisure, but it can be a part of it, and an important part for some people. Telling them they could have more free time if they just gave up their passions or hobbies that cost money isn't likely to lead anywhere.

    • smokel 10 hours ago

      It's slightly more complicated than that. If people work less, they make less money, and that means they can't buy a house, to name just one example. Housing is not getting any cheaper for a myriad of reasons. The same goes for healthcare, and even for drinking beer.

      People could work less, but it's a group effort. As long as some narcissistic idiots who want more instead of less are in charge, this is not going to change easily.

      • smartmic 9 hours ago

        Yes, and now we have come full circle back to capitalism. As soon as a gap forms between capital and untapped resources, the capitalist engine keeps running: the rich get richer and the poor get poorer. It is difficult or impossible to break out of this on a large scale.

        • pineaux 9 hours ago

          The poor don't necessarily get poorer; that is not a given in capitalism. But at some point capitalism will converge to feudalism, and at that point the poor will become slaves.

          And if not needed, culled. For being "unproductive" or "unattractive" or generally "worthless".

          That's my cynical take.

          As long as the rich can be reined in somehow, the poor will not necessarily become poorer.

          • shafyy 9 hours ago

            In neoliberal capitalism they do, though, because companies can maximize profits without internalizing external costs (such as health care, social welfare, and environmental costs).

  • aredox 9 hours ago

    It is indeed completely stupid: if he can do that, others can too, which means they can be more productive than he is, and so the only way he would spend more time with his fiancé, family, friends, and dog is by quickly becoming unemployed.

  • deadbabe 9 hours ago

    Yes this is what people constantly get wrong about AI. When AI starts to replace certain tasks, we will then create newer, larger tasks that will keep us busy, even when using AI to its full advantage.

    • balfirevic 8 hours ago

      Do you expect AI to stop becoming more capable before it can do every economically useful task better than any human?

      • deadbabe 6 hours ago

        No, I expect the economy to continually expand.

        • balfirevic 6 hours ago

          If so, then there will be no place for humans in that economy, except for recreational purposes, regardless of its expansion.

          • deadbabe 3 hours ago

            But then who will be the consumer?

    • demirbey05 9 hours ago

      That's what I meant. I don't think your boss wants to pay you the same money for less work time.

    • pydry 9 hours ago

      Or you'll be kicked out onto the street and shamed for being jobless.

  • tigrezno 9 hours ago

    Capitalism is ending with AGI/ASI, that's for sure.

    • XCSme 9 hours ago

      I am pretty sure UBI will be at least tested at a large scale in our lifetime.

      • shafyy 9 hours ago

        In the US, they can't even figure out universal healthcare. Do you really think they are going to go for UBI?

        • glhaynes 8 hours ago

          Huge numbers of desperate, armed, unemployed people have a way of focusing the will.

          • sp527 7 hours ago

            That's what Anduril is for

        • XCSme 9 hours ago

          I am from the EU, so I can see it happening here, or in some smaller countries. Here you already sort of have a UBI, in that you get enough social benefits to live off if unemployed.

nunez 6 hours ago

Great article.

About this:

> So with all these tools built for developers, I realized that the people who gain the most from all these tools are not the developers—it’s all the people around who don’t write code. It’s easier for customers to show what they really want, we can enter sales meetings with a PoC which makes the selling part easier, and product owners can generate a PoC to show the developers how they think and get quicker feedback.

This is, by far, the most harrowing outcome for software engineers from the proliferation of LLMs.

The code they generate is good enough (at first glance) to _finally_ convince non-technical people that they can ship software without those pesky software developers.

This was central to Eric Schmidt's speech to Stanford business school students last year [^0]:

> You understand how powerful that is.

> If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want as opposed to the programmers that work for me who don't do what I ask, right?

> The programmers here know what I'm talking about.

> So imagine a non-arrogant programmer that actually does what you want and you don't have to pay all that money to and there's infinite supply of these programs.

> That's all within the next year or two.
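
As a toy illustration of "arbitrary language to arbitrary digital command" (generate_code below is a hypothetical LLM call, not anything from the speech):

    # Natural language in, executable Python out.
    request = "rename every .jpeg in ./photos to .jpg"

    generated = generate_code(request)  # hypothetical LLM call
    # The model might return something like:
    #
    #   from pathlib import Path
    #   for p in Path("photos").glob("*.jpeg"):
    #       p.rename(p.with_suffix(".jpg"))
    #
    exec(generated)  # the "arbitrary digital command" step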

Non-technical business people hold the corporate wallet almost everywhere, even in SV.

Many of them ultimately consider software engineering a means to an end, despite our best efforts to convince them otherwise.

I'm not in that social stratum, but if I had to guess: they know that LLM-generated code can be buggy and introduce loads of slop, but they also know that it takes fewer than `n` developers to maintain all of that, versus the `n` developers it takes to generate _and_ maintain software today.

While I'm at it: many of them are extremely excited about offshoring that maintenance even more aggressively or, under this administration, leveraging H1-B (or other visa'd) labor to do so (or perhaps not, as I'll explain in a bit).

To those people, the ideal end result of all of this is a small group of architects/10x developers reviewing and essentially project-managing a literal army of contractors and offshore labor pumping out LLM-generated products faster than ever, plus a slightly larger, but still small, group of SRE-like senior engineers administrating it all in a not-too-dissimilar way.

Since this is obviously going to be more-or-less career-ending for hundreds of thousands of people in the medium term, it will, in their (hypothetical) minds, spawn a gig economy of replaceable developers maintaining LLM-produced code. No need to compete for H1-Bs when you have disposable labor right at home.

Actually, it's worse than that. From the article:

> For everything I don’t like doing, AI is phenomenally good. Take design, for instance. I’ve used Lovable and Figma to generate beautiful UIs and then copied the HTML to implement in an Elixir dashboard, and the results have been stunning. I also use AI when I write articles to help with spelling and maintaining a clear narrative thread. It’s rare, but sometimes you get lucky and the AI handles a simple task for you perfectly.

This will apply to _almost everyone_ not serving a core "front-of-house" business function, not just engineering. Designers, technical writers, and content marketers are already getting pummeled by LLMs and diffusion models, and that's unlikely to reverse now that these tools are getting even better at those tasks.

I hope I'm dead wrong about all of this and that the future of AI in software is just for supercharging developers.

Nonetheless, all of this has been making me really sad, tbh. I'm a sales engineer, so we're, fortunately, much less affected by all of this as long as sales remains a people-first career. However, I love writing software by hand ("artisanally", as it's now, depressingly, labelled). It's what made me fall in love with my career, and it's now being reduced to worthlessness by our own hand. Thanks to Big AI, whenever I open vim to write some code, I'm now forced to think "should I just hand this over to Claude Code?"

[^0]: https://news.ycombinator.com/item?id=41263143

sp527 7 hours ago

> This is why Python or JavaScript are great languages to start with (even if JavaScript is a horrible language)

The author was hemorrhaging credibility all along the way, and then this comment really drove home what he is: a bike-shedder who probably deliberately introduces complexity into projects to keep himself employed. If you read between the lines of the post, it is clearly a product of that mindset and motivation.

'AI is only good at the simple parts that I don't like, but it's bad at the simple parts I do like and that are my personal expertise and keep me employed.'

Yeah okay buddy.