chrisco255 a day ago

AI does 0% of my work and we are actively hiring. As someone mentioned on another AI thread, if AI is so good why aren't people just doing 15 PRs a day on open source projects like Node.js, React, Kubernetes, Linux, Ansible, etc?

AI is sometimes a productivity booster for a dev, sometimes not. And it's unpredictable when it will and won't be. It's not great at giving you confidence signals when you should be skeptical of its output.

In any sufficiently complex software project, as much of the development is about domain knowledge, asking the right questions, balancing resources, guarding against risks, interfacing with a team to scope and vet and iterate on a feature, managing resources, analyzing customer feedback, thinking of new features, improving existing features, etc.

When AI is a productivity booster, it's great, but modern software is an evolving, organic product, that requires a team to maintain, expand, improve, etc. As of yet, no AI can take the place of that.

  • projectazorian 8 hours ago

    > if AI is so good why aren't people just doing 15 PRs a day on open source projects like Node.js, React, Kubernetes, Linux, Ansible, etc?

    Because those projects are mature and have a very high bar for contributions, so aren't a good fit for AI?

    Opening a PR on Linux is very different from opening a PR on a company's non-critical-path CRUD service.

    • jimbokun an hour ago

      I suspect the vast majority of day to day software development is new features and maintenance on large projects, rather than greenfield development.

  • beambot a day ago

    You don't use any AI - drafting documentation, write boilerplate code, transcribing meetings, simplifying team communications, searching product documentation, alternative to Google or StackOverflow, creating presentations, as a brainstorming partner? I would consider all of these "work".

    If you say AI does 0% of your work, I'd say you're either a genius, behind the curve or being disingenuous.

    • chrisco255 8 hours ago

      LLM does 0% of my work, don't know what to tell you. LLMs are like 3 years old, I learned to do everything I know without LLMs, how is that hard to believe? How do you think all the software you use everyday, including this site itself, was written? 99.99% of it without any LLMs at all.

      Do I use LLMs as an alternative to Googling? Absolutely. That doesn't mean AI is doing my job. Google and Stack Overflow also do 0% of my job. It's great as a reference tool. But if you're going to be that pedantic, we've got to count any help I receive from any human or tool as doing some % of my job. Do I count the open source software I build on? Do I count Slack as doing some % of my job since I don't have to go into the office and interface with everyone face-to-face? Does Ford get some of the credit for building the vehicle that gets me to the office?

      Have I used a meeting transcription tool? Occasionally, yeah. That doesn't mean it does any part of my work. My job was never to transcribe meetings. Do I use it to brainstorm? No, I've found it's fairly useless for that. Do I use it to create presentations? No, I just write my slides the old fashioned way.

    • thisisit 11 hours ago

      Is that so hard to believe? My work uses proprietary language. Something like ABAP for SAP [1]. AI has ingested lot of documentation available on the internet. But it cannot tell the difference between versions. So, AI code often has correct but deprecated functions.

      And don't get me started on the "time savings" for boiler plate documentation. It messes up every time.

      [1]: https://en.wikipedia.org/wiki/ABAP

    • ponector 18 hours ago

      How can AI search into documentation, if the documentation is a thousands of obsolete and contradicting Jira tickets, few outdated Confluence pages with mail attachments and handful of excel files on SharePoint?

      • dent9 15 hours ago

        Supposedly there is a way to get an AI to do exactly this, we have it slated as an "intern project". Which feels ironic in itself. Using an intern to figure out how to get an AI to rectify our Jira and train on our confluence to help us and our users

      • grim_io 16 hours ago

        Using AI to distill all of that sprawling and contradicting documentation is a great use case.

        Grounding it in the reality of the current implementation for the extra cherry on top.

      • jckahn 17 hours ago

        With the necessary MCP servers.

        • ponector 15 hours ago

          Good luck to receive security clearance for that. Even Cursor is not allowed, though everyone is using it with private account.

          • ffsm8 15 hours ago

            No, even if you got clearance... What's that gonna help with? The point was that the jira tickets are obsolete and likely contradict each other with changing requirements over time. More advanced tooling might be able to guess from looking at the git history and double-checking via linked tickets etc, but there is currently no tooling available that actually does this, today.

            And that's coming from someone that has repeatedly gone on record saying "my expectation for our industry is a gigantic contraction because of LLM", ...but this isn't a scenario that's plausible with current models.

    • JohnFen 14 hours ago

      Very few people where I work are using genAI for any of those things at all.

    • sublinear a day ago

      Are you saying everyone who isn't barely starting their career is a genius? In the current state of things I'd gladly take mediocre work from a human over slop from an AI.

      • hoherd 21 hours ago

        Seriously this. Doing code reviews on LLM created code is so frustrating. If the code was submitted by a junior engineer I could get on zoom with them and educate them, which would make them a better team mate, advance their career goals, and make the world slightly better. With AI created code the review process is a series of tiny struggles to dig up out of the hole the LLM created and get back to baseline code quality, and it'll probably be the same Sisyphean struggle with the next PR.

        • soraminazuki 15 hours ago

          I had to review code that couldn't even do a straightforward map, filter, and reduce properly. But with management pushing hard for AI use, I feel powerless to push back against it.

        • antisol 20 hours ago

          Ha! Not just "the next PR", in my experience about 30% of the time you tell it "hey this slop you gave me is horribly broken because <reason>", and it says "you're absolutely right! I totally 100% understand the problem now, and I'll totally 100% fix that for you right now!", and then proceeds to deliver exactly the same broken slop it game me before.

          • jimbokun an hour ago

            Interesting, isn’t it?

            It knows that the apologetic tone and acknowledging understanding of your critique is the most probable response for it to generate. But that’s very different from actually understanding how it should change the code.

      • revskill 19 hours ago

        You have prompt skill issue.

        • ath3nd 17 hours ago

          The only study currently trying to measure productivity of experienced devs using LLMs showed they suffer a 19% decline in productivity.

          https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

          Since that study demonstrated that experienced developers currently suffer a decline in their productivity when using LLMs, it's perfectly likely that less experienced/junior developers who normally will struggle with syntax or simple tasks like organizing their code are the ones experiencing the boost of productivity from LLMs.

          Thus, it seems the devs benefitting the most from LLMs are the ones with the skill issue/more junior/early in their career.

          Which group do you belong to?

          • revskill 17 hours ago

            No, it's just logical, LLM is a useful tool, those experienced are just code monkeys that whey they got stuck.

            • ath3nd 17 hours ago

              > No, it's just logical, LLM is a useful tool

              How open are you to the possibility that it's the other way around? Because the study suggests that it's actually junior code monkeys that benefit from LLMs, and experienced software engineers don't instead get a decline of their productivity.

              At least that's what the only available study so far shows.

              That's corroborated with my experience mentoring juniors, the more they struggle with basic things like syntax or expressing their thoughts clearly in code, the more benefit they got from using LLM tools like Claude.

              Once they go mid-level and above, the LLMs are a detriment to them. Do you currently get big benefit from LLMs? Maybe you are more early in your career?

              • bakuninsbart 11 hours ago

                I think you are making a couple of very good points getting bogged down in the wrong framework of discussion. Let me rephrase what I think you are saying:

                Once you are very comfortable in a domain, it is detrimental to have to wrangle a junior dev with low IQ, way too much confidence but encyclopediac knowledge of everything instead of just doing it yourself.

                The dichotomy of Junior vs. Senior is a bit misleading here, every junior is uncomfortable in the domain they are working in, but a Senior probably isn't comfortable in all domains. For example, many people with 10+ SE experience I know aren't very good with databases and data engineering, which is becoming an increasingly large part of the job. For someone who has worked 10+ years on Java Backends, now attempting to write Pythin data pipelines, Coding Agents might be a useful tool to gap that bridge.

                The other thing is creation vs. critique. I often let my code, writing and planning be rewiewed by Claude or Gemini, because once I have created something, I know it very well, and I can very quickly go through 20 points of criticism/recommendations/tips and pick out the relevant ones. - And honestly, that has been super helpful. Using it that way around, Claude has caught a number of bugs, taught me some new tricks and made me aware of some interesting tech.

              • revskill 16 hours ago

                Those "experienced" actually are just senior code monkeys if u ask me, it's trivial right ? I don't assume the reason why, but it's just illogical for a junior to get benefits and the seniors don't. The wrong ones here is the "experienced".

                I know how to use the AI tools for my purpose (that's why i use them), and of course, to make the impossible possible. Even if i failed to do so, it's not the decrease in productivity because without them, i don't think i can do better than the LLM.

                • soraminazuki 12 hours ago

                  The responses in this thread captures the absurdity of the AI hype so well that it's satirical, even. Putting all blame of AI deficiency on "bad prompting," denial of concrete evidence, and the refusal to provide one either is a recurring pattern in these discussions. The repeated angry name-calling towards experienced developers who failed to uphold your beliefs is the cherry on top.

                • ath3nd 16 hours ago

                  > Those "experienced" actually are just senior code monkeys if u ask me, it's trivial right

                  Well, it seems you are not open for discussion. There is no reason to disparage the senior devs that participated in the study just because you don't like the results of the study. But the study happened, and it is clear: experienced developers are the ones that suffered from using LLMs.

                  > but it's just illogical for a junior to get benefits and the seniors don't

                  Experienced car drivers won't benefit from a youtube tutorial how to drive, junior car drivers might. That's similar to junior developers being potentially the ones who can benefit from the basic things that an LLM can help you with, e.g helping you with syntax and structure your thoughts and write a scaffold to get you started. Those are concerns that experienced developers don't need help with, similarly how experienced drivers don't need youtube tutorials how to shift a gear. There is nothing illogical in that premise. Do you agree?

                  > i don't think i can do better than the LLM

                  I most certainly can tell you that there are 1000s of developers that can do infinitely better than any of the current LLMs, and those developers are fairly often senior. It seems like a skill issue you mentioned in the beginning of your post might actually be on your side.

                  • revskill 16 hours ago

                    Productivity could just be simple automation. U just describe one part of the whole process. My point stands still. If u cannot get llm to benefit u, u are the problem.

                    • skydhash 13 hours ago

                      That's like saying if you cannot get a boat to fly, you're a bad pilot.

                • sublinear 8 hours ago

                  You're gonna need to define "code monkey".

                  My understanding is that a code monkey just does what they're told. All the planning and behind the scenes negotiations that the senior devs and management do is completely opaque to them.

    • bluefirebrand a day ago

      > If you say AI does 0% of your work, I'd say you're either a genius, behind the curve or being disingenuous

      AI was doing 0% of my work 10 years ago too, why should I be any less effective without it now?

      You think I'm behind the curve because I'm not buying into the AI craze?

      Ok. What's so important about being on the curve anyways, exactly? My boss won't pay me a single cent more for using AI, so why should I care?

      • handoflixue 20 hours ago

        We used to do all our math on slide rules. They're just as effective as they always were.

        But when you're being graded on a curve, standing still can still mean falling behind.

        Which isn't to say that AI is definitively ahead of the curve; I think we're a bit early for that. But as actual answers to your actual questions - it's important because if everyone else gets ahead of you, your boss will STOP paying you

        (and if you're "good at AI", you can at least make bank until the bubble bursts)

    • ath3nd 19 hours ago

      > You don't use any AI - drafting documentation, write boilerplate code, transcribing meetings, simplifying team communications, searching product documentation, alternative to Google or StackOverflow, creating presentations, as a brainstorming partner? I would consider all of these "work". If you say AI does 0% of your work, I'd say you're either a genius, behind the curve or being disingenuous.

      There are reasons that seasoned OSS developers reject AI PRs: https://news.itsfoss.com/curl-ai-slop/ (like the creator of curl). Additionally, the only study to date currently measuring the impact on LLMs on experienced developers found a modest 19% decline in productivity when using an LLM for their daily work.

      https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

      Now we can ponder behind the reasons that the study showed experienced developers get a decrease of productivity, and you anecdotally experience a boost of "productivity", but why think about things when we can ask an LLM?

      - experienced developers -> measured decrease of productivity

      - you -> perceived increase of productivity

      Here is what ChatGPT-5 thinks about the potential reason (AI slop below):

      "Why You Might Feel More Productive

      If senior developers are seeing a decline in productivity, but you are experiencing the opposite, it stands to reason that you are more junior. Here are some reasons why LLMs might help junior developers like you to feel more productive:

      Lower Barrier to Entry

      - LLMs help fill in gaps in knowledge—syntax, APIs, patterns—so you can move faster without constantly Googling or reading docs.

      - Confidence Boost You get instant feedback, suggestions, and explanations. That can make you feel more capable and reduce hesitation.

      - Acceleration of Learning You’re not just coding—you’re learning as you go. LLMs act like a tutor, speeding up your understanding of concepts and best practices.

      - More Output, Less Friction You might be producing more code, solving more problems, and feeling that momentum—especially if you are just starting your coding journey."

pram a day ago

We are still hiring engineers. Everyone has a paid Cursor sub, and some people use Claude Code. We also have Claude in GitHub doing automatic PRs.

It’s mostly seen as a force multiplier. Our platform is all Java+Spring so obviously the LLMs are particularly effective because it’s so common. It hasn’t really replaced anyone though, also because it’s Java+Spring so most of our platform is an enormous incomprehensible mess lol

  • jimbokun an hour ago

    I work on a lot of Java Spring code.

    Agreed that it’s inherently a bunch of barely comprehensible slop that the AI slop probably fits right in, lol.

827a a day ago

Its a nice productivity and capability boost that feels on the same magnitude as, for example, React. The "dream" of it being able to just take tickets and agentically get a PR up for review is possible for ~5% of tickets. That goes up to ~10% if your organization has no standards at all, including even a self-serving standard like "at least make sure the repository remains useful to future AI usage".

My organization would still hire as many software engineers as we could afford.

- Stack Overflow has to be actually dead at this point. There's no reason to go there, or even Google, anymore.

- Using it for exploratory high level research and summarization into unfamiliar repos is pretty nice.

- Very rarely does AI write code that I feel would last a year without needing to be rewritten. That makes it good for things like knocking out a quick script or updating a button color.

- None of them actually follow instructions e.g. in Cursor rules. Its a serious problem. It doesn't matter how many times or where I tell it "one component per file, one component per file", all caps, threaten its children, offer it a cookie, it just does whatever it wants.

  • torginus a day ago

    > Stack Overflow has to be actually dead at this point. There's no reason to go there, or even Google, anymore.

    I wonder if we are going to pay for that, as a society. The number of times I went there, asking some tricky question about a framework, and have the actual author or one of the core contributors answer me was astonishing.

    • nicbou 19 hours ago

      As I wrote in a root comment, it’s decimating the traffic of informational websites. We will lose a lot of those websites that produced high-effort, simple-conclusion information. Who will bother with in-depth reviews if someone else gets the sale? Who will patiently answer questions if no one asks anymore?

      I think that a certain kind of craftsmanship will be lost.

      • jimbokun an hour ago

        AI companies are eating their seed corn.

    • jimbokun an hour ago

      We are losing the next generation of content to train future AIs on.

    • piva00 19 hours ago

      I've been using SO since it first launched, and through time it has changed a lot already. It used to be simple to ask questions and get answers, when it grew and the huge influx of questions required moderation it was a nice smooth change but over time the pearl clutching of mods to mark many reasonable questions as irrelevant started to grip the usefulness.

      I used to answer a lot of the basic questions just to help others as I've felt I had been helped, the moderation shift applying more and more rules started to make me feel unwelcomed to ask questions and even to answer. I do understand why it happened, with the influx of people trying to game the platform to show off in their resumés they were at the "top" of whatever buzzword was hot in the industry at the time but it still affected me as a contributing user out of kindness.

      By 2018 I would not even login to vote or add comments, and I feel it was already going on a slow downhill path, and LLMs will definitely kill it.

      We will definitely suffer, SO has been an incredible resource to figure out things not covered well in documentation, I remember when proper experts (i.e. maintainers of libraries/frameworks) would jump in to answer about a weird edge case, or clarify the usage of a feature, explain why it was misuse, etc.

      Right now I don't see anything else that will provide this knowledge to LLMs, in 10-20 years time there will be a lot missing in training datasets, and it will be a slow degradation of knowledge available in the open for us all to learn from.

      • torginus 19 hours ago

        Yeah, SO had many problems, I'd argue most stemming from the points system that:

        - Made people treat it like a contest and tried to game it

        - Obscure, difficult questions, with obscure, difficult answers barely being valued, while 'How do I make a GET request in node' going gangbusters.

        • kldg 17 hours ago

          as an aside, the ranking/rating systems are largely why I quit using social media a long time ago and stick to a small chatroom of people I know. it's relatively tolerable here where scores aren't next to posts (and the user base generally tolerant, lets whoever say their piece and move on), but I use following ublock filter to hide the score stuff:

          news.ycombinator.com###karma

          news.ycombinator.com##.score

          news.ycombinator.com##td:has-text(/^karma:$/):upward(tr)

          I've thought about removing the graying feature and maybe doing a random sort of replies, but haven't accumulated enough spite yet.

          • jimbokun an hour ago

            I’m addicted to Reddit but confess it’s mostly about saying something clever or funny to get high scores, not necessarily insightful or informative.

            Still better than X though.

  • scubadude a day ago

    > Stack Overflow has to be actually dead at this point. There's no reason to go there, or even Google, anymore.

    If, like the meme, you just copied from SO without using your brain then yes AI is comparable.

    If you appreciate SO for the discussion (peer review) about the answers and contrasting approaches sometimes out of left field, well good luck because AI can't and won't give you that.

    • jimbokun an hour ago

      The give and take of contrasting views is certainly lost by LLMs, which create the illusion of a consensus answer to any question.

    • jansan a day ago

      There is some very deep knowledge in SO's comments and some lower rated replies. But I suspect that is not what was making Stackoverflow so popular.

      • polotics 17 hours ago

        I suspect the new Stack Overflow is Discord channels, at least for some platforms/frameworks. Too bad the free-form non-threaded discussion format of Discord makes things so hard to follow.

        • jansan 16 hours ago

          If a company uses discord for support, then I am not using their product, unless it is a computer game. It is that simple.

    • AndrewDucker 19 hours ago

      Yes, very much this. I liked having 5 different approaches, and discussions of which one works best, rather than having an AI select one to use.

codingdave 17 hours ago

I'm not working right now... kinda waiting to see if the AI hype dies off before re-entering the industry. Because the last 2 teams of execs I've worked under both went "all in" on AI, asking everyone to use it as much as possible. Both led us down paths that made no sense from a product perspective. Both failed. It is the only time in my career that I've seen an entire exec team get fired all at once, and it happened twice. I know this is one anecdote, and a rare one, but it left me uninspired to just go do it again, not until I find leadership who has a reasonable perspective on AI. To me, that means treating it is just one tool among many, to be used when it is the best tool for a job, and only then.

So to answer the original question of how my morale is? It is non-existent. I am quite open to fixing that, but haven't seen much that indicates now is the right time to search for something new.

al_borland 18 hours ago

Our last CIO bought into all the AI hype early on and pushed it hard, before we were even allowed to use it. It was an odd time. We’d go to a town hall and get told that AI is going to change everything, and then get an email telling us we weren’t allowed to use it.

Now that we can use Copilot, we have a new CIO and I don’t hear about it so much. There is still some AI hype, but it’s more about how it’s being used in our products, rather than how to use it internally to do the work.

Apparently sometime in the next year we’re getting a new version of Jira with some AI that can do user stories on its own, but I don’t see that changing much of anything.

The bottleneck has rarely been the actual writing of code, it’s been people making decisions and general bureaucracy. AI isn’t solving that. Copilot has also not impressed anyone on my team. As far as the code we work on, it’s pretty bad. There are a few niche things it helps with, mostly writing queries to pull values out of complex json. That saves a little time, but hardly 30-50%. More like 1-2%.

Management stopped giving us new people, while pressuring us to do more, for many years now. This was a trend long before AI and I haven’t noticed any major change. I’d say it’s been this way for over 10 years now, ever since they had the realization that tasks could be automated.

  • binary132 an hour ago

    Some people really bought into the meme where AI was going to replace everyone and only those aggressively pimping it would be able to keep their jobs.

  • ryandrake 12 hours ago

    For most senior software engineer roles at companies I've seen, the act of actually thinking about, designing, and writing code is probably 25% or less of your job. There's so much other non-development stuff that goes on. Advocating for features, writing boilerplate docs, seeking stamps of approval for this and that, reviewing other people's work, navigating internal process (your release is gated by Deployment System X which is waiting for Privacy and Legal sign-off), interviewing, writing down what work you've done for your next annual review with your manager, and of course meetings, meetings, meetings. Even if AI does speed up coding by 30-50%, that's maybe 10-15% of actual savings.

YZF a day ago

- No major change in hiring due to AI.

- A lot of our code base is very specialized and complex, AI still not good enough to replace human judgement/knowledge but can help in various ways.

- Not yet clear (to me anyways) how much of a productivity gain we're getting.

- We've always had more things we want to do than what we could get done. So if we can get more productivity there's plenty of places to use it. But again, not clear that's actually happening in any major way.

I think the jury is still out on this one. Curious what others will say here. My personal opinion is that unless AI gets smart enough to replace more experienced developer completely, and it's far from that, then I'm quite sure there's not going to be less software jobs. If AI gets to a point where it is equal to a good/senior developer we'll have to see. Even then it might be that our jobs will just turn into more managing AI but it's not a zero sum game, we'll do more things. Superintelligence is a different story, i.e. AI that is better than humans in every cognitive aspect.

roarcher 20 hours ago

I recently used Claude to help me understand a math-dense research paper. It was useful for answering general questions about the structure of the algorithm, where to find information in the paper, and gain a high level/intuitive understanding of how the algorithm worked. It was absolutely abysmal at implementing the code, and would regularly make things up when I probed it about subtleties in the math.

Overall, it sped up my learning greatly, but I had to verify everything it said and its code was a mess. It's a useful tool when used appropriately but it's not threatening my job anytime soon.

  • DanielHB 19 hours ago

    I mostly used it for exploratory analysis of tools/libraries I am not familiar with, where it points me the parts of the API docs I need to lookup to verify the output. Then I manually adapt or rewrite it.

    So in the end I use it fairly often when setting up new things (new infra, new files, new tools, new functions, etc). Although the time it saves is not coding time, but googling/boilerplating time. But in practice I work in a well established project where I rarely do this kind of thing (I don't think I even created a new file in the project last week).

    If I am already familiar with the tool/library I almost always skip it (occasionally autocomplete is useful, but I could easily live without it). Occasionally I used for small self-contained snippets of code (usually no more than a single function). Last one I remember was some date formatting code.

    • skydhash 13 hours ago

      Some project do have nice documentation that is a pleasure to use. Laravel's documentation, Postgres'one,... Even if I can get an answer faster with google search and now LLMs, I just open the main documentation site and spend an extra 5 minutes looking around.

      • DanielHB 13 hours ago

        I mean it more points me about stuff I don't know that I don't know.

        For example, I use it quite a lot when setting up new terraform configuration on resources I am not super familiar with, it often points me out to resources and options I didn't know existed. Of course I always look up everything it spits out (it pretty much never outputs something that actually runs anyway and even when it does it doesn't work out of the bat).

        But once the thing is set up, it is almost useless to use it to make small changes.

        • skydhash 13 hours ago

          Maybe I lack a sense of rush, but I usually take the slow lane for these kind of work. Actually reading through the whole documentation and taking notes. Maybe do some experiment. I'm wary of things going wrong and me not being able to give a clear and short explanation on the cause. If something is important for me to do, I may as well do it well (or explain the tradeoffs that makes it a hacky solution instead).

          • jimbokun an hour ago

            It can be hard to explain to people that slowing down to make sure something is done right, can make the overall project go much faster.

  • fiftyacorn 19 hours ago

    I was on reddit/math and someone was talking about how they were trying to get AI to create there final year thesis from uni. They tried 4-5 different AI solutions and reckoned they got 40-60% of the work from AI but it couldnt conclude the project. A few others on the thread had said the same

    Doesnt mean it wont get there - just that it isnt there yet

  • freilanzer 20 hours ago

    > I had to verify everything it said

    Letting it condense something like a paper and checking it afterwards might be a good learning exercise.

ai_assisted_dev a day ago

I have been in software for 20 years, and was just about to quit 2-3 years ago because of how mundane things became. And now I am actually loving it again because of AI. I'd say, AI writes 95% of my code, and I use it for 75% of the decisions during working on a project.

I am under MUCH more pressure to deliver more in shorter periods of time, with just me involved in several layers of decision making, rather than having a whole team. Which may sound scary, but it pays the bills. At one company I contract with, I now have 2 PMs; where I am the only dev on a production app with users, shipping new features every few days (rather than weeks).

It feels more like performance art, than it even feels like software development at this point. I am still waiting for some of my features to come crashing prod down in fantastic fashion, being paged at 3am in the morning; debugging for 12 hours straight because AI has built such a gigantic footgun for me.... but it has yet to happen. If anything I am doing less work than before - being paid a little more, and the companies working with me have built a true dependency on my skills to both ship, maintain and implement stuff.

  • moltar 21 hours ago

    I was thinking of doing something similar. I think I’m well positioned for this as I have a natural ability to juggle many contexts, I used to run a software agency, and I’m pretty good at architecture early on which means solutions come out more robust and flexible. I have had really good experience with AI tools and I’m constantly evolving my workflows.

    I’m wondering how did you land your current gigs?

    Thank you.

    • ai_assisted_dev 16 hours ago

      I land most of my clients by maintaining my blog and a github with open source projects. I have build a lot of general purpose MCPs and quite some tools, which are all written by Claude (3.5 and 4.0) and now GPT5. On my blog I just blog together with AI. It sounds silly, yes... I don't want to share it here publicly, but it looks good and it gets me people in my inbox (email/linkedin).

      So I post on LinkedIn & Reddit, and I am not doing it in a spammy way. Do some outreach through LinkedIn and post on YCombinator on my personal account on the monthly who's hiring/freelancing posts. But a lot of the traffic I get comes from organic search and reddit -> clients. I had a client who told me they found me on Twitter; but I never even posted there, so someone reposted an article.

      • moltar 11 hours ago

        I am glad to hear that the content still works. I thought ChatGPT would kill all SEO and content marketing.

        • ai_assisted_dev 5 hours ago

          I usually do write the whole article. Maybe I spend an hour on it. Sometimes even much longer. And then I have a way of rewriting it with AI to improve the writing style. Which I then proof read and keep improving. I do this because reading back what I wrote feels even worse than listening to my own voice. It just gives me a visceral reaction lol. But yes this worked. I always wanted to write and blog prior to AI, but my aversion to proofreading my own writing stopped me from doing so over a decade (dozen actual genuine attempts).

          Google does not mind. I rank quite highly for some niche keyword on LLM programming.

  • englishrookie a day ago

    Would you mind sharing your setup (LLM model, IDE, best practices)? Personally, I'm struggling to get value out of Continue.dev in VSCode (using Gemini 2.0 Flash by default, with the option to switch to more advanced models). I still revert to pasting code into ChatGPT chat window (using the website), frequently.

    Are you using agentic features, given that you have not just one but two PMs?

    • ai_assisted_dev 16 hours ago

      The biggest tip I can give you is to stay in the framework you are most convenient in, and have the most experience. Start building stuff the way you would by yourself, but then start delegating the repetitive tasks to an agent. My best recommendations would be using Cursor in Agent mode, and switching to VScode in Agent mode when your credits with Cursor run out. The reason why I like Cursor more is because of the Checkpoints. And VScode Copilot Agent checkpoints suck; but you can still use git to create your own checkpoints (git add / git stash, etc).

      I don't even use completions, really just agent mode. I do planning, wireframing, creating specs all with agents. Even small MVPs created in 5 minutes, deployed in 10, during a meeting to just brainstorm. As for the models. Go with Claude 3.5 or 4.0, GPT5. Use sequentialthinking and Taskmaster MCP. I could write a book about it... but the best way to go about it is to dive into, get frustrated, push through and then learn it the hard way. I started delegating a lot of my programming work the day ChatGPT came out; just copy and pasting, and since that day, my reliance on AI has just been increasing, and I have been getting better at it (and now I am at this stage.. with 2 PMS).

      • ai_assisted_dev 4 hours ago

        Also to add another point is that if you felt like an agent did not help you correctly, or way overshot, did too much edits, etc. Go back to the original prompt, rephrase it - sometimes you need 1-2 times. Sometimes the model just don't work for your workflow. It can become quite delicate.

        One of the bigger things is when you introduced some bug, start working backwards with the agent, simplifying whatever you build to its bare necessities, and the moment it dissapears, start a new chat, and build it back up to what it was before (in the desired non-bugged state). This often works if you then also switch to a completely different model.

    • pbastos 21 hours ago

        Not OP, but regarding your situation, I suggest moving to an agentic solution instead of “copy-pasting to GPT” — this will boost your coding productivity. There are several tools available, and to each their own, but try out Claude Code.
dreckneck a day ago

In the practical sense, not much of my work actually changed and my company seems to be hiring the same as before.

In the psychological sense, I'm actually devastated. I'm honestly struggling to be motivated to learn/create new things. I'm always overthinking stuff like:

- "Why would I learn mobile app dev if in the near future there will be an AI making better UIs than me?" - "Why would I write a development blog?" - "Why would I publish an open-source library on GitHub? So that OpenAI can train its LLM on it?" - "Why would I even bother?"

And then, my motivation sharply drops to zero. What I've been up to lately is playing with non-tech related hobbies and considering switching careers...

oaiey a day ago

Worried about the next generation who - I think - will not learn normally (incl whatever it does to the brain) and may never reach the degree of engineering capability some of us have.

Tired of leadership who think productivity will raise.

Tired of AI summaries sent around unreflected as meeting minutes / action items. Tired of working and responding on these.

  • amatecha a day ago

    Seriously, AI meeting summaries are such shit. I see my name tasked with things I never committed to, or I see conclusions or action items grossly misrepresented, to a degree that any actual person who wrote those would lose their job. Stop using this shit please. What a waste of time and energy.

brothrock 21 hours ago

AI has drastically changed how I make decisions about code and how I code in general. I get less bogged down with boilerplate code and issues, which makes me more efficient and allows me to enjoy architecting more. Additionally, I have found it extremely helpful in writing lower-level code from scratch rather than relying on plug-and-play libraries with questionable support. For example, why use a SQLite abstraction library when I can use LLMs to interact directly with the C source code? Sure it’s more lines of code, but I control everything. I wouldn’t have had the time before. This has also been extremely helpful in embedded systems and low-level Bluetooth.

In terms of hiring- I co-own a small consultancy. I just hired a sub to help me while on parental leave with some UI work. AI isn’t going to help my team integrate, deploy, or make informed decision while I’m out.

Side note, with a newborn (sleeping on me at this moment), I can make real meaningful edits to my codebase pretty much on my phone. Then review, test, integrate when I have the time. It’s amazing, but I still feel you have to know what you are doing, and I am selective on what tasks, and how to split them up. I also throw away a lot of generated code, same as I throw away a lot of my first iterations, it’s all part of the process.

I think saying “AI is going X% of my work” is the wrong attitude. I’m still doing work when I use AI, it’s just different. That statement kind of assumes you are blindly shipping robot code, which sounds horrible and zero fun.

  • memen 14 hours ago

    > why use a SQLite abstraction library when I can use LLMs to interact directly with the C source code?

    Because of the accumulated knowledge in these abstraction layers and because of the abstraction itself resulting in readable and maintainable code.

    Yes you can move the abstraction one level up, but you don't control it if you nor the LLM meet the level of accumulated knowledge that is embedded in this abstraction. Let alone future contributors to your codebase.

    Of course it is all depending on context and there is no one-fits-all strategy here.

kazinator a day ago

I feel like I suddenly have a superpower.

I'm wearing glasses that tell me who all the fucking assholes and impostors are.

  • mirekrusin a day ago

    Do you mind elaborating on how? It's hard to say if it's sarcasm or you're referring to some genuinely interesting insight.

    • bluefirebrand a day ago

      My insight is if you think AI is giving you a 50% performance boost, you're either an imposter or a paid shill

lsb a day ago

I used Claude Code to navigate a legacy codebase the other day, and having the ability to ask "how many of these files have helper methods that are duplicated or almost but not quite exactly duplicated?" was very much a superpower.

  • jansan 21 hours ago

    Just like refactoring tools felt like a superpower, if you were already around at that time (early 2000s).

jgb1984 21 hours ago

I'm not using AI for anything. I read and write my own emails, make my own slides, write my own python code using vim, debian, openbox, bash and tmux, just as I have been for almost 20 years. I don't even use an LSP or autocompletion! Hell, I even read actual books, on paper!

And yes, I did test ChatGPT, claude, cursor, aider... They produce subpar code, riddled with subtle and not so subtle bugs, each of my attempts turned out to be a massive waste of time.

LLM is a plague and I wish it had never showed up, the negative effects on so many aspects of the world are numerous and saddening.

nobodynowhere a day ago

Morale is low because leaders think AI can do that amount of work, but it can’t actually (at least not yet). This both means that they don’t hire enough people to do the work needed, while also “drive by” insulting the intelligence of the people they are overworking.

  • thaw13579 a day ago

    This has been my observation as well. To add, I'm seeing leadership and stakeholders use their chats with LLMs to justify claims like "what I'm asking for is incredibly simple according to ChatGPT, and it should be done by end of today." Of course it rarely is, because the prompt is underspecified, the LLM solution is oversimplified, and it lacks context on the complexities of existing codebase, and the team's development & deployment processes.

    • lazystar a day ago

      and the LLM probably responded with "You're absolutely right!" to every idea they asked about.

      • dns_snek 19 hours ago

        That's one of the things I find most interesting when it comes to LLMs, a depressingly large proportion of the population seems to enjoy interacting with a deranged sycophant who treats all of their ideas and comments as a stroke of genius. Every time I read a response like "[you're right] [you're smart] [more than others]" to the most obvious observation it makes me squirm with discomfort. Especially when I just pointed out a grave error in LLM's reasoning.

        My suspicion is that it's a reflection of how people like Altman want to be treated. As an European who worked with US companies, my experience with work communication there can only be summed up as being heavily biased towards toxic positivity. Take that up another 3 egotistical notches for CEOs and you get the ChatGPT tone.

        • freedomben 13 hours ago

          > As an European who worked with US companies, my experience with work communication there can only be summed up as being heavily biased towards toxic positivity

          This is definitely true, and is something that I've noticed that really annoys me. I have noticed that it varies quite a bit by region and industry, so not universal to the US or monolithic. The west coast of the US seems to be the most extreme in my experience

        • ponector 18 hours ago

          >> toxic positivity

          I've once heard the company mandated more positive tone. To avoid words like "issue".

          Not an issue, it's an opportunity! Okay, we have a critical opportunity in production!

        • cafebeen 11 hours ago

          Yes, this feature might be a prime driver of user engagement and retention, and it could even emerge "naturally" if those criteria are included for optimization in RLHF. In the same way that the infinite scrolling feed works in social media, the deranged sycophant might be the addictive hook for chatbots.

  • soraminazuki 16 hours ago

    No, "leaders" don't think AI can do that amount of work. They don't care. It's just a pretext for cost cutting.

    For a good example, just look at how Google does "support." It's just robots doing shoddy work and screwing over people. Can better compensated and organized human support team do better? Of course, but the rich execs don't want to spend a penny to help people if they can get away with it.

ponector 18 hours ago

Hiring is slower, salaries for open positions are down as well. But the reason is more offshoring to cheaper locations than AI.

As of AI, I've been asked to test a partial rewrite of the current UI to the new components. For a few weeks I've been logging 10+ bugs a day. The only explanation I have, they use AI tool to produce nicely looking code which does not work properly in a complex app.

  • xtracto 13 hours ago

    Just a comment: One of the reasons why offshoring is increasing and will likely increase in the following year, is because CXOs are being sold that a cheap Mexican, Tico, or Indian using AI can be as performant as a local.(also Language barriers are decreasing also due to AI)

    I'm Mexican (in Mexico) and I've seen this firsthand. There may be some truth to it, but soon enough these new companies will find out what several others found in the 90s (when the first wave of tech outsourcing came): The bottleneck is in communication and culture, not performance.

    Anyway, point is, in a way AI is pushing the outsourcing trend a bit.

    • ponector 10 hours ago

      The next stage of offshoring: companies offshored to Eastern Europe are now offshoring to India due to substantial rise of salaries and costs in EE.

ryanchants 14 hours ago

> AI doing 30-50% of your work

I use AI for taking info and restructuring it for me. Rewrite a Linear ticket in a proper format, take an info dump and turn it into a spike outcome or ADR doc that I can then refine. I also like it for the rote stuff I haven't memorized the structure of: building OpenSearch queries, writing boto3 snippets, etc. Other than that, my job is the same as it was pre-LLM hype. And from talking to other engineers, it seems that my experience is fairly standard.

caro_kann a day ago

My company is still hiring engineers like it was doing before. About the work itself I can say LLMs are good with PoC or new projects, I can't say the same about already existing codebase. For me it's a good tool, but not THE solution. Lately I'm making a lot of AWS Serverless configurations with Cloudformation and LLMs hallucinate a lot for that. At this point, I always verify if it exists in the doc or not, because it spits out stuff that doesn't exist at all.

brap a day ago

I often find myself pissed off that AI can’t properly do even the most trivial, menial coding work. And I have to spend more time guiding it than doing it myself.

On the other hand I find it super useful for debugging. I can paste 500k tokens into Gemini with logs and a chunk of the codebase and ask it what’s wrong, 80% it gets it right.

jraph a day ago

Patiently looking forward for the HN front page to be about something else than generative AI.

  • dcminter 19 hours ago

    Essentially off-topic but while that's my perception too, actually there are only 3 out of 30 front page stories on AI at the current time, so it's more like 10%

    It's definitely the most consistent topic but there's a lot of other stuff.

    • jraph 18 hours ago

      Today is not so bad indeed, we've seen between 1/5 and 1/3 in recent weeks. Quite the dilution. Granted, there's been a lot of releases lately.

      I agree that there's a lot of other stuff though, even on the worst days.

      • dcminter 18 hours ago

        High tide is around a third (like the other day with the new tlChatGPT release) but it's mostly down around the current level. I knocked this together to explore just this thought: hntags.com

  • jansan 21 hours ago

    I found it amusing when a few days ago the front page was littered with ChatGPT 5 news, and then suddenly, when reactions turned negative, these news entirely disappeared.

sssilver a day ago

It’s like autocomplete on steroids.

When code autocomplete first came out everyone thought software engineering would become 10x more productive.

Then it turned out writing code was only a small part of the complex endeavor of designing, building, and shipping a software system.

ishita159 17 hours ago

Most engineers on my team are feeling let down with the AI hype. Vibe coding makes some mistakes and does a good job of hiding the things it gets wrong.

They spend more time spotting and fixing bugs and basically have been feeling frustrated.

It's also annoying for the team in general. Projects that would otherwise take a couple of days have sometimes taken over 2 weeks, and it is hard to predict how long something will take. That adds a lot of pressure for everyone.

b_e_n_t_o_n a day ago

I'm really enjoying using Claude Code. There is a learning curve, and you have to set your project up in a way that helps these agents work better, but when you do it's a massive productivity boost with certain stuff. It's generated some decent looking landing pages and other UI stuff that I would have otherwise spent multiple hours on. It can even build some backend services that I would have also spent a couple hours on. This time adds up, which lets me see my family and friends more and that's the most important thing to me.

I don't really see it replacing us in the near future though, it would be almost useless if I wasn't there to guide it, write interfaces it must satisfy, write the tests it uses to validate its work etc. I find that projects become highly modularised, with defined interfaces between everything, so it can just go to work in a folder satisfying tests and interfaces while I work on other stuff. Architecting for the agents seems to lead to better design overall which is a win.

I'm just writing crud apps though, I imagine it's less useful in other domains or in code bases which are older and less designed for agents.

My next experiment is designing a really high level component library to see if it can write dashboards and apps with. It seems to struggle with more interactive UI's as opposed to landing pages.

rsynnott 16 hours ago

> I 'm just wondering what the morale is with AI doing 30-50% of your work?

You might want to be less credulous around LLM vendor marketing. Outside of possibly the blogspam/ultra-low-end journalism industry, and maybe low-end translation, LLMs aren’t doing 30-50% of anyone’s work.

dbetteridge a day ago

Tired.

Mostly of having to try and explain to people why having an AI reduce software development workload by 30-50% doesn't reduce headcount or time taken similarly.

Turns out, lots of time is still sunk in talking about the features with PM's, stakeholders, customers etc.

Reducing the amount of time a dev NEEDS to spend doing boilerplate means they have more time to do the things that previously got ignored in a time poor state, like cleaning up tech debt or security checks or accessibility etc etc

  • bluefirebrand a day ago

    > having to try and explain to people why having an AI reduce software development workload by 30-50%

    I'm tired of having to try and explain that AI isn't remotely reducing my workload by 30-50%, and in fact it often probably slows me down because the stupid AI autocomplete gets in the way with incorrect suggestions and prevents me from getting into any kind of flow

o11c a day ago

I'm just waiting for the hype-cycle to end. AI might revolutionize some industry (probably natural-language-adjacent), but not ours. COBOL has already been attempted, and far more competently (and with less energy cost).

If people can seriously have an AI do 50% of their work, that's usually a confession that they weren't actually doing real work in the first place. Or, at least, they lacked the basic competence with tools that that any university sophomore should have.

Sometimes, however, it is instead a confession "I previously wasn't allowed to copy the preexisting solutions, but thanks to the magic of copyright laundering, now I can!"

  • b_e_n_t_o_n a day ago

    I think of LLMs as essentially translators - taking natural language and translating it into something else. It works great with writing HTML for example. The more declarative and high-level the language is, the better it does. Which makes intuitive sense, the closer the output is to the input the better it does imo.

    So generally the people getting the most use out of LLMs are people who are using these higher levels of abstractions. And I imagine we will be building more abstractions like HTML to get more use out of it.

  • bluefirebrand a day ago

    > If people can seriously have an AI do 50% of their work, that's usually a confession that they weren't actually doing real work in the first place. Or, at least, they lacked the basic competence with tools that that any university sophomore should have.

    Strongly agree here. I am extremely skeptical of anyone reporting this kind of productivity gain.

sandos 16 hours ago

Sorry, AI does much less than 1% of my work. I work on semi-ossified, old, embedded, safety-critical code. Not exactly AIs forte, sadly.

We were (finally!) given the go for even using AI just before summer vacation this year, and I was very excited, having been obsessed with AI 20 years ago. I was still excited quite a while, until I slowly understood all the limitations that come with LLMs. We can not trust them, and this is one of the fundamental problems: verifying things takes a long time, sometimes its even faster just writing the code yourself.

We do have non-core-product tasks that can greatly benefit from AI, but that is already a small part of our job.

I did find two areas where LLMs are very useful: generatin documentation from code (mermaidjs is useful here) and parsing GDB output!

Seriously, parsing GDB output was like an epiphany. I was for real blown away by it. It correctly generated a very detailed explanation of what kind of overwriting was happening when I happened to use a wild pointer. Its so good at seeing patterns, even combining data from several overwritten variables and parsing what was written there. I could have done it myself, but I seldom do such deep analysis in GDB and it did it in literally 10 seconds. Sadly, it was not that terribly useful this time, but I do feel that in the future GDB+AI is a winning concept. But at the same time, I spend very little time in GDB per year.

StellarScience 13 hours ago

I find AI helpful when coding in unfamiliar languages or topics, as it generates initial code that roughly works, that I can then iterate on.

Even so, I have to constantly hound the AI to write concise code, to reuse code, to consolidate duplicate code blocks. When I ask it to remove the useless comments, it also removes the useful comments, so I have to goad it back into adding back individual helpful comments.

I had AI write some Python unit tests and it showed me how to mock, which I had never done in Python. That's great! But when I examined the tests, they were so "white box" and brittle that almost any change in implementation would break the tests.

When coding in a familiar language (C++), I tried turning on the auto-AI assistant (for our one project where security rules allowed it) and while I was impressed that it would auto-complete whole blocks of text based on my actual code base, not once was I able to accept those blocks as-is.

So for me at this point AI is at best a net +2% productivity improvement, though I surely have lots to learn about other ways in which it could be useful.

horttemppa a day ago

I work with rather 'basic' CRUD applications with CMS and user management portals + some integrations to CRM systems etc. There is a lot of legacy stuff and rather bad practices or no general style guidelines followed.

AI helps here and there but honestly the bottleneck for output is not how fast the code is produced. Task priorization, lacking requirements, information silos and similar issues cause a lot of 'non-coding work' for developers (and probably just waiting around for some who don't want to take initiative). Also I think the most time consuming coding task is usually debugging and AI tools don't really excel at that in my experience.

That being said, we are not hiring at the moment but that really doesn't have anything to do with AI.

picafrost a day ago

My organization isn't a pure tech company so not much has changed. Management acknowledges AI's velocity but maintains a healthy skepticism of throwing "AI" into everything as a panacea. Writing the code has rarely been the hard part.

0points 21 hours ago

> I'm just wondering what the morale is with AI doing 30-50% of your work?

I don't know any developers who use AI to that large extent.

Myself am mostly waiting for the hype to die out so we can have a sober conversation about the future.

nicbou 20 hours ago

I am a former software engineer. Now I run a website that helps people settle in Germany.

Google AI summaries and ChatGPT have almost halved my traffic. They are a scourge on informational websites, parasites.

It’s depressing to see the independent web being strangled like that. It’s only a matter of time before they become the entire internet for many, and then the enshittification will be more brutal than anything before it.

I will be fine, but I have to divert 6-10 months on my life to damage control[0] instead of working on what matters to my audience. That happened by chance; other websites won’t be so lucky.

So yeah, morale is low. It feels like a brazen consolidation play by big tech, in all aspects of our lives.

On the bright side, it does make coding a bit easier. It spits out small bits of code and saves me a lot of API docs round trips. I can focus on business logic instead of basic code. AI is also a phenomenal writing tool. I use it for trying different phrasing options, reverse word and express search, and translation nuances. It does enable me in that way.

[0] https://nicolasbouliane.com/blog/health-insurance

givemeethekeys a day ago

The slowdown in hiring outside of AI is the bigger morale hit.

caleblloyd 3 hours ago

I am the Product/Eng Lead and a Co-founder of a company formed ~1 year ago building AI-native developer tooling for Platform Engineers. Have been able to iterate very quickly through PoC phases and get initial feedback on ideas quicker. For features that make it into production code, we do have to spend some time re-working them with more formal architectures to remove "AI slop" but we are also able to try more things out to figure out what to move forward, so I feel like it is a net gain.

Part of "AI-native" means being able to really focus on how we can improve our Product to lessen upfront burden on users and increase time-to-value. For the first time in a while, I feel like there is more skill needed in building an app than just doing MVC + REST + Validation + Form Building. We focus on the minimum data needed for each form upfront from our users, then stream things like Titles, Icons, Descriptions, etc in a progressive manner to reduce form filling burden on our users.

I've been able to hire and mentor Engineers at a quicker pace than in the past. We have a mix of newer and seasoned Engineers. The newer Engineers seem to be learning far quicker with focused mentoring on how to effectively prompt AI for code discovery, scaffolding, and writing tests. Seasoned Engineers are able to work across the stack to understand and contribute to dependencies outside of their main focus because it's easier to understand the codebase and work across languages/frameworks.

AI in development has proven useful for some things, but thoughtful architecture with skilled personnel driving always seems to get the best results. Our vision from our product is the same, we want it to be a force multiplier for skilled Platform Engineers.

tiberius_p a day ago

I work in hardware design and verification. I've seen many AI-based EDA tools proposed at conferences but in the team that I'm working now I haven't seen AI being adopted at all. Among the proposed tools that caught my attention: generating SystemVerilog assertions from natural language prompts, generating code fixes from lint errors, generating requirements, vplans and verification metrics from specifications written in natural language, using LLMs inside IDE's as coding agents and chat bots to query the code. I think the hardware industry will be harder to penetrate by AI because hardware companies are more secretive about their HDL code and they go to great lengths to avoid leaks. That's why most of them have an in-house IT infrastructure and they avoid the cloud as much as possible especially when it comes to storing HDL code, running HDL simulations, formal verification tools and synthesis. Even if they were to employ locally hosted AI solutions that would require big investments in expensive GPUs and expensive electricity bills: the industry giants will afford it while the little players won't. The ultimate goal is to tapeout bug-free chips and AI can be a great source of bugs if not properly supervised. So humans are still the main cogs in the machine here. LLMs and coding agents can make our jobs a whole lot easier and pleasant by taking care of the boring tasks and leaving us with the higher level decisions, but they won't replace us any time soon.

frankie_t 18 hours ago

My morale is extremely low. But I have different circumstances: I live under war, with my future life perspectives unknown. Software engineering, apart from being enjoyable, provided the sense of security. I felt that I could at least either relocate to some cheap country and work remotely, or attempt to relocate to an expensive country with good jobs.

With AI, the future seems just so much worse for me. I feel that productivity boost will not benefit me in any way (apart from some distant trickle down dream). I expect the outsource, and remote work in general to be impacted negatively the most. Maybe there's going to be some defensive measures to protect domestic specialists, but that wouldn't apply to me anyway unless I relocate (and probably acquire citizenship).

>Is your company hiring more/ have they stopped hiring software engineers

Stopped hiring completely and reduced workforce, but the reasons stated were financial, not AI.

>Is the management team putting more pressure to get more things done

with less workforce, there is naturally more work to do. But I can't say there is a change in pressure, and no one forces AI upon you.

austin-cheney 7 hours ago

We are not allowed to use AI where I work but even if we were it wouldn’t be very helpful.

My guess is that people who find the most benefit from AI are the people who have the most questions to ask, which aren’t the most productive developers either way.

webprofusion a day ago

I think currently once you get into the weeds of a project the AI can only really lend a helping hand, rather than do 30-50% of the work.

It can kickstart new projects to get over the blank page syndrome but after that there's still work, either prompting or fixing it yourself.

There are requirements-led approaches where you can try to stay in prompt mode as much as possible (like feeding spec to a junior dev) but there is a point where you just have to do things yourself.

Software development has never been about lines of code, it has always required a lot of back and forth discussion, decisions, digging into company/domain lore to get the background on stuff.

Reviewing AI code, and lots of it, is hard work - it can get stuff wrong when you least expect it ("I'll just stub out this authentication so it returns true and our test passes")

With all that in mind though, as someone who would pay other devs to do work I would be horrified if someone spent a week writing unit tests that I can clearly see an AI would generate in 30 seconds. There are some task that just make sense for AI to do now.

  • webprofusion a day ago

    Where it really does open your eyes is when dealing with stuff you just wouldn't have done otherwise: - Can't remember the name of that web tool you used to base64 decode locally, just ask for one. - Would love to have a quick tool that does X: done. - Wouldn't know where to start building a C++ VST plugin for audio processing: done - Point it an a protocol RFC and get it to generate an API implementation stub: done (that one went from "maybe one day", to "shipped" simply because the initial donkey work got done by AI.

wink 18 hours ago

If it's doing 10% of chore work, I am not worried.

I don't think I'm being paid to 1:1 convert a dumb crud app or rest api from one language to another, although of course you do that once a decade in a typical job.

keb_ 13 hours ago

No actual SWE believes AI can do 30-50% of their work. For me at least, it can maybe do 5-10% of my work; this usually involves scaffolding some test cases, generating some quick documentation, and just plainly replacing the Google Search -> Stack Overflow flow.

Cursor and Claude Code are by themselves awful engineers, who at best just save me a lot of keystrokes.

lousken 13 hours ago

AI is way too slow when searching docs, debugging issue etc. Also falls short with anything new or poorly documented. Even with Cerebras speed I do think I can search docs faster than asking AI a question, waiting for response and checking if it is valid or not.

It's nice as a passive thing like meeting transcription, or extra brainstorm head but that's about it.

JohnFen 14 hours ago

So far, my employer is putting zero pressure on devs to use genAI. Their official policy is "use it is you want to, but we're not going to pay for it".

About 10% of the devs in my location are using genAI as a dev tool to some degree, but for most of those devs, that degree is pretty small.

Ironically(?), our primary products are deep learning systems, so this is a very AI-savvy group.

tom_m 21 hours ago

It's a great tool for a programmer...but the external perception isn't great. It can put pressure on people and also lead to undervaluing programmers. Overall it's probably a bad thing. Though it is fun.

thefz 20 hours ago

> I'm just wondering what the morale is with AI doing 30-50% of your work?

It is doing 0% of my work and honestly I am tired of 80% of HN posts being about it in one way or another.

prakashn27 16 hours ago

I feel it is hyped product oversold to people who cannot told. When we developers tell them that it cannot do as much as you expect, they think that we are afraid of loosing our job and downplaying the AI.

Like everyone said, my productivity boost is in 1-4% range and not more than that.

  • benterix 16 hours ago

    I actually have flashbacks from the crypto hype, with people coming up with exactly the same arguments: that we don't understand blockchain, that we don't get how web3 is revolutionary, that we don't grasp the consequences of the revolution and so on.

    The difference here is that LLMs do have some genuine use cases, it's just they are far away from the hype.

obayesshelton 14 hours ago

I am waiting for all these "vibe-coded" startups to ultimately fail or need a total rewrite and in we come.

No-Code Low-Code Vibe-Code

I have seen it all before

Great for 0 to 0.1

Soon as you have real domain problems to solve and any complexity the mess that is being created to create a solution is going to need fixing.

ealhad a day ago

As a software engineer: the only impact the AI bubble has on me is the time it takes to explain what's a stake to less tech-savvy colleagues. Zero consequences on my actual job, excepet being pissed of each time a promising project "pivots to AI" and starts shoehorning it everywhere.

As a person I'm increasingly worried about the consequences of people using it, and of what happens when the bubble bursts.

arun_sharma2020 19 hours ago

We are expanding our software engineering team, and management has suggested focusing on hiring senior developers, as they are more likely to quickly grasp AI capabilities and write effective prompts. However, in my personal opinion, while tools like GitHub Copilot are helpful for simpler tasks, they are not well-suited for complex areas—especially payment calculations and payment gateway integrations.

Netcob a day ago

Once in a while I save ~10 minutes by using AI. About as often as embarrassing myself by having to admit that my primary source was an AI while researching some topic.

The main thing that changed is that the CTO is in more of a "move fast, break things"-mood now (minus the insane silicon valley funding) because he can quickly vibe-code a proof-of-concept, so development gets derailed more often.

exfalso a day ago

Mostly feeling like a caveman. I've been trying and failing to use it productively since the start of the hype. The amount of time wasted could've been used for actual development.

I just simply don't get it. Productivity delta is literally negative.

I've been asking to do projects where I thought "oh, maybe this project has a chance of getting an AI productivity boost". Nope. Personal projects all failed as well.

I don't get it. I guess I'm getting old. "Grandpa let me write the prompt, you write it like this".

  • bluefirebrand a day ago

    No, you're not alone

    I find it wastes my time more than it helps

    Everyone insists I must be using it wrong

    I was never arrogant enough to think I'm a superior coder to many people, but AI code is so bad and the experience using it is so tedious that I'm starting to seriously question the skills of anyone who finds themselves more productive using AI for code instead of writing it themselves

    • dns_snek 17 hours ago

      Agreed, but I'm also open to the likely possibility that LLMs genuinely work quite well in a few niches that I don't happen to work in, like writing run-of-the-mill React components where open source training data is truly abundant.

      In day to day work I could only trust it to help me with the most conventional problems that the average developer experiences in the "top N" most popular programming languages and frameworks, but I don't need help with that because search engines are faster and lead to more trustworthy results.

      I turn to LLMs when I have a problem that I can't solve after at least 10 minutes of my own research, which probably means I've strayed off the beaten path a bit. This is where response quality goes down the drain. The LLM now succumbs to hallucinations and bad pattern-matching like disregarding important details, suggesting solutions to superficially similar problems, parroting inapplicable conventional wisdoms, and summarizing the top 5 google search results and calling it "deep research".

      • bluefirebrand 13 hours ago

        > LLMs genuinely work quite well in a few niches that I don't happen to work in, like writing run-of-the-mill React components where open source training data is truly abundant

        I write run of the mill React components quite often and this has not been my experience with AI either so I really don't know what gives

    • antisol 20 hours ago

      Agreed.

      perhaps 1% of the time I've asked an LLM to write code for me, has it given me something useful and not taken more time than just writing the thing myself.

      It has happened, but those instances are vastly outnumbered by it spewing out garbage that I would be professionally embarrassed to ever commit into a repo, and/or me repeatedly screaming at it "no, dumbass, I already told you why that isn't a solution to the problem"

  • ath3nd 17 hours ago

    > I don't get it. I guess I'm getting old. "Grandpa let me write the prompt, you write it like this".

    It's not you getting old (although we all are), it's that you are probably already experienced and can produce better and more relevant code than the mid-to-low quality code produced by any LLM even with the best prompting.

    Just so we are clear, in the only current actual study measuring productivity of experienced developers using an LLM so far, it actually led to a 19% decline in productivity. So there is a big chance that you are an experienced dev, and the ones that do experience a bump in productivity are the less experienced devs.

    https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

    I also want to mention that you are not alone. There are plenty of us, see here:

    - Claude Code is a Slot Machine https://news.ycombinator.com/item?id=44702046

    - GPTs and Feeling Left Behind: https://news.ycombinator.com/item?id=44851214

    - I tried coding with AI, I became lazy and stupid: https://news.ycombinator.com/item?id=44858641

    The current LLM hype reminds me of the Scrum/Agile hype where people could swear that it works for them and it if didn't for you, you were not following some scrum ritual right. It's the same with LLMs, apparently you are not asking nicely enough and giving writing 4000 lines of pseudocode and specs to produce 1 line of well written code. LLM coding is the new Scrum: useful to an extent and in moderation, but once they become a cult, you better not engage and let it die out on its own.

    There will be a whole industry of prompting "experts", prompting books, the same as there were different crops of SCRUM, SAFe, and who knows what else. All we can do is sit on the sidelines and laugh.

its-kostya 21 hours ago

Our company is trailing AI tools for developers. I've had good and bad success with them, but my job satisfaction is way low in both cases.

CM30 15 hours ago

Given that our company prohibits the use of AI tools for programming purposes, things aren't any different to before. We've just found no reason to use said systems given the inconsistent quality of the output.

j2kun 12 hours ago

I make sure my management sees how poorly many of the attempts to use LLMs end up going, even for basic stuff it should do well.

fcatalan a day ago

My org is always a decade behind, so I'm still just ignoring the official push for whatever Oracle low code crap is called.

Hiring is as haphazard and inadequate as it has been in the last 25 years, no change there.

AI usage is personal, widespread and on a don't ask don't tell basis.

I use it a lot to:

- Write bullshit reports that no one ever reads.

- Generate minimal documentation for decade old projects that had none.

- Small, low stakes, low complexity improvements, like when having to update this page that was ugly when someone created it in 1999, I'll plop it on aistudio to give it a basic bootstrap treatment.

- Simple automation that wasn't worth it before: Write me a bash script that does this thing that only comes up twice a year but I always hate.

- A couple times I have tried to come up with more complex greenfield stuff to do things that are needed but management doesn't ever acknowledge, but it always falls apart and starts needing actual work.

Morale is quite crappy, as ever, but since some of the above feels like secretly sticking it to The Man, there are these beautiful moments.

For example when the LLM almost nails your bimonthly performance self report from your chat history, and it takes 10 minutes instead of 2 hours, so you get to quietly look out of the window for a long while, feeling relaxed and smug about pocketing some of the gains from this awesome performance improvement.

pjmlp a day ago

The pressure to do more AI based work is certainly there.

Also from my experiences with agents, and given that I have been around computers since 1986, I can clearly see where the road is going.

Anyone involved with software engineering tasks, should see themselves becoming more of a technical architect for their coding agents, than raw coding, just like nowadays while Assembly is a required skill for some fields, others can code without ever learning anything about it.

Models will eventually become more relevant than specific programming languages, what is worth discussing X or Y is better, if I can generate any that I feel like asking for. If anything newer languages will have even harder time getting adopted, on top of everything that is expected, now they also have to be relevant for AI based workflows.

block_dagger a day ago

It’s still exciting times. Productivity up. In two years it will be different.

greatwhitenorth 21 hours ago

In my last company, they've fired all the employees except the CEO. He has a neuralink chip embedded in his brain and vibe codes all day through his brain waves. He even vibe codes during his sleep.

All companies will end up with just one employee. If you don't agree with this, you don't know how to prompt.

dns_snek 19 hours ago

The premise of the question is downright ridiculous. AI does 1% of my work and wastes 5% of my time.

doppelgunner 15 hours ago

Honestly I feel like AI is that new coworker who knows everything but keeps accidentally deleting the production database. Exciting to work with, slightly terrifying to trust.

gmerc 9 hours ago

Ya, you know, I think when the CSO finally is let out of the broom closet where they got locked for voicing concerns about AI adoption, we're going to see a hell of a reckoning. Prompt injection into code agents is murderous, especially since it can be persisted.

https://www.linkedin.com/posts/georgzoeller_how-stupidly-eas...

8note a day ago

its really fun, like learning to code again to see what all can be done, an how much more power is available at your fingertips.

what sucks though is that its super inconsistent whether the thing is gonna throw an error and ruin the flow, whether thats synchronous or async.

  • rkomorn a day ago

    I'm trying to use it to do things I've never done before (ie UI stuff when I've mostly been a backend SRE type).

    I like that it makes it easy to learn new things by example.

    I don't like that I have no idea if what I'm learning is correct (or at least recent / idiomatic), so everything I see that's new, I have to validate against other resources.

    I also don't really know if it's any different from "tutorial hell".

werealldevo a day ago

Angry because this is yet another play by the ruling class to make more money, and you, I, and everyone you know is going to pay dearly for it.

Baffled because there are too many rank-and-file tech workers who seem to think AI exciting/useful/interesting. It’s none of those things.

Just ask yourself who wants AI to succeed and what their motivations are. It is certainly not for your benefit.

nojvek 7 hours ago

I'd be very excited if AI could do my work. I could manage the AI and be more productive. Right now AI hallucinates a lot and I end up doing more work than I need to cleaning up and undoing random crap it generates.

prisenco a day ago

Biding my time. GPT5 was a wake up call. The hype will die down and the hangover will begin.

Moving fast in the beginning always has caveats.

In the meantime I'm doubling down on math and theory behind AI.

AtNightWeCode 13 hours ago

Mid management have always been the same. Idiots. Like at the very first corp I worked at, we got a drag and drop UI tool and the manager was like. We can add that feature in no time, just drag and drop a button.

Basically, no one cares.

piva00 19 hours ago

AI is not doing 30-50% of our work in a company of ~10k employees.

It's helping with a lot of toil work that used to be annoying to do, PMs can do their own data analysis without having to pull me out of deliverable tasks to craft a SQL query for something and put it up on a dashboard; I don't need to go copy-paste-adapt test cases to cover a change in some feature, I don't need, most times, to open many different sections of documentation to figure out how a library/framework/language feature should be used.

It's a boost to many boring tasks but anything more complex takes as much work to setup and maintain the environment for a LLM to understand the context, the codebase, the services' relationships, the internal knowledge, the pieces of infrastructure, as it does for me to just do the work.

I've been hybridising as much as I can, when I feel there's something a LLM would be good at I do the foundational work to set it up, and prompt it incrementally to work on the task so I can review each step before it goes haywire (which it usually does), it takes effort to read what's been generated, explain what it did wrong so it can correct course, and iteratively build 80% of the solution, most times it's not able to completely finish it since there's a lot of domain knowledge that isn't documented (and there's no point in documenting since it changes often enough). Otherwise it's been more productive to just do the work myself: get pen and paper to think through the task, break it down after I have a potential solution, and use LLMs to just do the very boring scaffolding for the task.

Does it help me to get unstuck when there's some boring but straightforward thing to do? Absolutely. Has it ever managed to finish a complex task even after being given all the context, setup the Markdown documentation, explain the dependencies, the project's purpose, etc.? No, it hasn't, not even close, in many cases it gave me more work to actually massage the code it wrote into something useful than if I had done it myself. I'm tired of trying the many approaches people seem to praise about and see it crumble, I spent a whole week in 2 of our services writing all the Markdown files, iterating through them to fix any missing context it could need, and every single time it broke down at some point while trying to execute a task so, for now, I just decided to use it as a nice tool and stopped getting anxious about "missing out".

SlightlyLeftPad 20 hours ago

Pretty pessimistic frankly. Management at all levels pushing for nearshoring SWE labor, meanwhile we’re training AI as a long term solution to fill the skill gap in the same nearshore labor. We were hired to be smart people and it’s frankly an insult to gaslight us into believing that it’s simply because it makes us more productive. Of course there’s a push for it with the intent to replace us. Why else would it be forced down our throats?

I’m looking for a way out of tech because of it.

  • hn_throw2025 19 hours ago

    > Of course there’s a push for it with the intent to replace us. Why else would it be forced down our throats?

    I still don’t see this, if only for the Managerial instinct for ass-covering.

    If something really matters and a prod showstopper emerges, can those non-technical supervisory managers be completely, absolutely, 100% sure the AI can fix the code and bring everything back up? If not, the buck would surely stop with them and they would be utterly helpless in that situation. The Board waiting on conference call while they stare at a pageful of code that may as well be written in ancient Sumerian.

    I can see developers taking a higher level role and using these tools, but I can’t really see managers interfacing directly with AI code generation. Unless they are completely risk tolerant, and you don’t get far up the greasy pole with those tendencies.

    • 000ooo000 17 hours ago

      Savvy management know how to insulate themselves from such accountability. I've never seen anyone held accountable for large-scale f-ups.

      • hn_throw2025 16 hours ago

        How, exactly? If a production showstopper needs to be worked on immediately.

        If the development is between non-technical management and some AI tool they have been using, how do they insulate themselves from being accountable to their superiors? Who is responsible, and who gets to fix it?

sublinear 21 hours ago

Only the most toxic workplaces are still pushing for this since several years ago.

AI is an irrelevant implementation detail, and if the pace of your work is not determined by business needs but rather how quickly you can crank out code, you should probably quit and find a real job somewhere better that isn't run by morons.

dudeinjapan a day ago

Cursor Bot on Github feels like a significant step forward, catches tons of stupid mistakes, typos, etc better than 95% of human reviewers can. The days of needing 2 reviewers on a PR are over IMHO, allows human reviewers to focus on broader architectural decisions.

wahnfrieden a day ago

Developer operations and architecture haven’t caught up to efficient and productive AI workflows yet. Most orgs don’t have good ways for all their employees to have agents running in parallel and closing the loop within the agent (generating results that the agent can check and iterate on, with the dev being able to jump in easily to inspect). These iterations still require too much manual and bespoke management. So management and devs don’t see the full picture yet on current genetic productivity potential and dismiss it as a wash on time savings.

deadbabe a day ago

50% of my code these days has been entirely replaced by AI, with little to no review beyond a cursory glance.

That 50% is unit tests.

Lionga a day ago

It is now 3 years since I was told AI will replace engineers in 6 month. How come all the AI companies have not replaced engineers?

  • BrouteMinou a day ago

    Are you familiar with: "the year of Linux on the desktop" ?

    The AI will replace us all in 2028! For real this time.

    But before that, all the mid-managers will be replaced first, then the tech writers, the QA people, the PM, the...

    The devs are closing the lights behind...

  • kypro 18 hours ago

    > How come all the AI companies have not replaced engineers?

    First you augment, then replace.

    Anyone doing simple web design/development work is fairly easily replaceable at this point.

renewiltord a day ago

It's fucking sick, dude. My buddy and I have a two-person team pulling contracts you needed a whole team to do before. Fucking love it, mate.

ivape a day ago

Most companies are going to have to rebuild their business entirely or die. That’s very exciting because I really think this will usher in a new wave of companies/hiring. Everything has to be rebuilt so I really don’t buy the hiring Armageddon.

  • Lionga a day ago

    1 to 5% of software can be improved with AI at the current level. For most incumbents will be even more secure as every startup will be forced to put some BS AI into their thing by investooors

    • nnashrat 18 hours ago

      This is just completely delusional. This whole forum is basically delusional at this point.