My company decided to fire all junior developers because "with AI, seniors don't need them anymore." Honestly, I'm on the verge of quitting. Deciding to cut devs at a software company while keeping the agile coaches and product owners who mostly hinder development has to be one of the dumbest decisions I have seen.
Sorry for the rant.
> Sorry for the rant.
Part of what makes this community great is that people share their personal experiences. It helps the rest of us understand what's going on in our industry, because most of us can't be at several different places at once.
The snake eats itself, too, since doing this closes the pipeline that produces the seniors they'll still need. Never mind that the people who made that call will be retired when the talent shortage rears its head in 10-15 years.
Yeah, that sounds like a "get out if you can" moment tbh. Like, the best case is that you take them at their word, in which case the company is merely extremely poorly run, but realistically it's more likely to be cover for "shit, we are running out of money", particularly if it's venture-funded.
Their stated rationale is obviously BS. They’re just trying to extract more from fewer workers by intensifying your working conditions. That’s why the managers are retained but not the juniors. Sure seems like they view you guys as adversaries not partners.
Exactly. The best time to have unionised was before this move, the second best time is right now.
What? Which company thinks juniors are just "little helpers that seniors need"?
Companies that think that Scrum rituals are mysterious and important.
Just anecdotally, I recognize and dread working with AI code nowadays. In the past, when I saw something stupid being done, my thought was "I wonder if there's a good reason they did it this way", but now it's increasingly "I wonder what other AI-written problems are in this module".
Being forced to use AI would be a nightmare, as it means I would have to turn that same distrust onto my own code. It's one thing to use AI to basically just customize a scaffold, like people do when using it to bootstrap some Next.js website or something. It's a completely different matter to have it writing code in huge, data-driven existing codebases.
We get to check a box on what AI we used when we close a ticket. I used to select "none" because most of the time that was the case; sometimes I would pick one if I'd used it to build some scaffolding or explore a performance issue.
But then we started having AI demos with the CTO where the presenters would say things like "I don't know how to code in Python but now I don't need to! yay!" and the C-level people would get very excited about this. That's when I realized that these poor developers, who just want to brown-nose and show off to the big cheese, are instead making an argument for their own demise.
Meanwhile I asked AI to make me a test and it mocked out everything I wanted to test, testing nothing, but passing. I wonder how much of these kinds of tests we have now...
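For anyone who hasn't hit this yet, here's a hypothetical sketch of the kind of test I mean (module and names invented, not my actual code): the unit under test is patched out, so the assertion only ever exercises the mock and passes no matter what the real code does.

    # Hypothetical example of a "test" that mocks out the very thing it claims to test.
    from unittest.mock import patch
    import pricing  # invented module standing in for the real code under test

    def test_calculate_total():
        # Patching pricing.calculate_total means the real implementation never runs.
        with patch("pricing.calculate_total", return_value=42) as fake:
            # This asserts against the mock's return value, not the real logic.
            assert pricing.calculate_total([10, 16, 16]) == 42
            fake.assert_called_once()
        # Result: green checkmark, zero coverage of the actual behavior.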
> Meanwhile I asked AI to make me a test and it mocked out everything I wanted to test, testing nothing, but passing. I wonder how much of these kinds of tests we have now
This sort of thing is what I'm most worried about with programmers who are very bullish on AI.
I don't think it's too controversial to say that most developers are much worse at reviewing code than they are at writing code
AI generated code is a code review exercise. If you accept my premise that most devs are worse at reviewing code than writing it, this should ring some alarm bells
You're spot on about developers being worse at reviewing code. With generated code, you still need to understand it if you want to maintain it.
I had another person send me some AI-generated code that was close to 90% working. It was missing something simple (something like appending to an array instead of overwriting it). The original developer could not understand or debug it. I'm afraid of the crap we're going to see over the next few years.
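To make that flavor of bug concrete, here's a hypothetical reconstruction (not the actual code I was sent): the generated code assigns where it should append, so only the last item survives, and it still looks fine on a one-element input.

    # Hypothetical sketch of the kind of one-line bug that hides in generated code.
    def collect_invalid_ids(records):
        invalid = []
        for record in records:
            if not record.get("valid"):
                invalid = [record["id"]]  # bug: overwrites the list every iteration
                # should be: invalid.append(record["id"])
        return invalid

    # Passes a quick check with one bad record, silently drops data with more.
    print(collect_invalid_ids([{"id": 1, "valid": False}, {"id": 2, "valid": False}]))  # [2], not [1, 2]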
There are multiple factors exacerbating this even further:
1) AI-generated code is "soulless". Reviewing code can be engaging if you are trying to understand your colleague's thought process. Sometimes you learn something new, sometimes you can feel superior, or politely disagree. AI ain't worth the hassle.
2) You unlearn how to write code by relying heavily on LLMs. On some level, it is similar to an IDE giving you context clues and definitions. On another, it can replace the hard work of thinking things through with mediocre yet usually workable solutions. And that's the big trap.
> Sometimes you learn something new, sometimes you can feel superior, or politely disagree. AI ain't worth the hassle.
Often I am looking for opportunities to mentor a less experienced developer
Why bother if they are just offloading their thinking to AI
> Why bother if they are just offloading their thinking to AI
Because you get to make them cry when you show them the error of their AI ways.
Consider it a teaching moment if you can show them why the AI-generated code is insufficient for production.
I think I'm somewhat decent at code review, but what I'm seeing currently is that the PRs are becoming larger and larger, because AI just rewrites everything, even if you only tell it to fix one thing. This leads to devs pushing more and bigger code changes. I'm convinced no company has enough senior developers to review these vast amounts of sloppy code.
You could make an analogy to the engineer babysitting a self-driving car during testing: you'd need to be a better driver than most to recognize when the machine is about to make a mistake and intervene.
This is a great point. With a mocked test like this, it's easy to literally click a button and look like you did something good, but potentially add massive negative value when the code fails and the tests all pass. It can be a very helpful tool, but you need to understand the code it produces, and it seems like people are missing that point.
It is the "we copied this snippet from stack overflow without understanding it" problem, except turned up to incredibly high volumes
Pff we will just use another AI to do the review /s
I think a lot of confusion and frustration about this is the assumption that all programming is the same thing.
I have seen areas with just tons of boilerplate, straight forward UI stuff, basic test skeletons, etc that I guess could work with AI.
But personally, in 30 years I’ve just never done much of that kind of work. Probably because I find it boring. I go look for hard or novel problems, or data structures and APIs that are causing problems (overgrown and hard to use, buggy, etc). A lot of the time is just figuring out what the authors expected the code to do, and anticipating what we will need for the next few years. I don’t see AI helping much with that.
> I have seen areas with just tons of boilerplate, straight forward UI stuff, basic test skeletons, etc that I guess could work with AI
The problem is that AI is only "kind of ok" at even this basic, straightforward boilerplate CRUD stuff
I’m probably just old, but IMO the main problem I see with Jr devs is a lack of “taste”
I don’t see AI helping
>I’m probably just old, but IMO the main problem I see with Jr devs is a lack of “taste”
No wonder the most ardent users of AI are NextJS and Javascript devs
Can that "taste" be measured in Dollarinos?
https://en.wikipedia.org/wiki/McNamara_fallacy
Management should not be telling devs which tools to use. Tell me why I'm wrong.
Not that you're wrong, but here's the argument from the other side:
- AI is supposed to be a great productivity booster, but no one has figured out a way of actually measuring developer productivity, so we'll just use the proxy of measuring whether they're using AI.
- Developers won't want to use AI because it's threatening to them, so we'll mandate it.
- Consistency and fungibility of human resources is incredibly valuable. 0.5x developers can use AI to get them up to at least 1x. And 10x developers? We're actually better off if the AI "slows them down" and makes their work output look more like everyone else's. 10x developers are actually a liability, because it can be hard to quantify and measure their contributions, so when things get worse after they leave, it feels intangible and impossible to fix.
If you have lazy devs, sometimes you need to convince (or coerce, or force) them out of their comfort zone to try something new. But typically this is done because management wants more while giving nothing in return, which is why no one tolerates these suggestions.
The other reason, which has been used to prevent devs from fixing bad code, is that making money is more important and sometimes that means moving away from costlier tools, or choosing tools because of specific business arrangements.
Couldn't agree more, but when the company is paying big money for a tool you can bet they're going to make sure people are using it
Is that the RTO mandates because we have so many unused 5 year office leases?
I see, it's part deux.
"We made a poor investment and now we're making it our employees problem" is absolutely a common outcome from business leaders yeah
The only thing that trickles down is bullshit unfortunately
Standardization is valuable.
An extreme example of your position would be a shop where everyone uses a different language. That's obviously untenable.
A more reasonable example might be choice of editor. I'm of the opinion that management should provide everyone a standard editor, then not mandate its use. This means anyone can hop onto any device and know how to use it, even if the device owner has a super-customized vim setup. Folks who become familiar with the standard editor are also better positioned to help their colleagues troubleshoot their setups.
I don't see how this applies to AI assistants much if at all. It feels like mandating devs run all their ideas by a particular intern. If said intern isn't an architect then there's no value in having a single point of contact.
There is organizational efficiency around everyone using the same tools. Management can pick a set of tools to focus on.
Not all developers know how to use all tools. Management providing education on tools can increase their efficiency.
Developers do not stay up to date with all available tools that exist. Management providing better tools can make people more efficient.
That justifies using common tools and standards, but not why management in particular should be doing the selection.
Every mature place I've worked, management delegated the task of choosing common tools to the senior / staff engineers, as well as decisions around when to make changes or exceptions.
The places that didn't do this were engineer-founded, or were dysfunctional enough that managers could splurge on massive contracts and then force people to use the paid-for tools to justify the contract post hoc. Every one of these was an example of what not to do.
/can/ is doing some heavy lifting here. Management is just as capable of forcing on developers tools that look good in 15-minute demos but suck for power users.
I mean, ideally, management should not be telling me anything and should let me do my job in splendid, peaceful isolation. In practice, however, management tells me how to do all sorts of things, all the time, and they're responsible for me continuing to get paid, so I have to listen.
You're also partly responsible for whether they get paid; if the product sinks, it's game over. So they have to listen too.
You're answering a question that wasn't asked so you can bring your view on unionization into the conversation. The implicit question is whether management should, not whether they can.
It's about power to effect change in an organization. The more power you have, the better the systems you can establish and sustain. Unions should be brought up constantly in these threads about AI mandates and "leadership" directions.
Unions might not be the best solution, or the most practical. I'm all ears for better ways to fight back against bogus leadership. How else can software developers advocate for their interests in an environment where their power is sharply declining (non-performance-based layoffs, "performance"-based layoffs, reduction in junior hiring, reduction in in-office perks, reduction in total comp, etc.)?
Unionization is not possible while there are hundreds of millions of eager scab workers in other countries.
Does this imply US salaries aren't possible either?
NAFTA wasn't inevitable. Globalization has largely been about (some) people making money as we all race to the bottom, and that has meant a steady supply of desperate people willing to sell their labor for less than the current crop of workers.
Arguably one motivation for Trump's tariffs is to onshore jobs. It's an incredibly brutish and stupid way to go about it, though. A much better way would have been to oppose NAFTA and similar global-marketplace policies, as Bernie Sanders has done for decades now (and has largely been in a pretty lonely position doing so).
Maybe we could tariff offshore code?
Hey now but then we couldn't have all the crypto scams and increasingly complex and insecure software layers at bottom dollar! Forgot to mention all the spyware! How are the executives supposed to get their illegal kickbacks? Won't somebody think of the nepobabies!?
"When AI is able to deliver coding tasks based on a prompt, there won’t be enough copies of the Mythical Man Month to dissuade business folks from trying to accelerate road maps and product strategies by provisioning fleets of AI coding agents."
Submitted yesterday, but no luck :D
https://varoa.net/2025/04/07/ai-generated-code.html
I like your blog!
Thanks!
LLMs require comparatively little training to use, and in fact training (like how to optimize prompts etc.) is probably a waste of time because the model behavior and interfaces change so frequently.
This puts them in the class of tools where the benefit is obvious once there is actually a real benefit (and not, say, the foreshadowing of some future benefit). This sort of tool doesn't require a mandate to gain adoption.
So, in the subdomains of software development where the LLMs are already useful, developers will naturally pick them up, assuming the business has secured the policy support and funding. In the areas where they aren't useful, businesses should trust developers and not waste their time with mandates.
And I say this as someone who uses LLMs all day every day for Python, Go, Bash, C/C++, pretty much everything. But as an active user I see the limitations constantly and wouldn't mandate their use upon anyone.
prediction: Prompt engineering will get so complicated. There will be a movement towards somehow telling the computer exactly what to do using a logical language of some sort.
Yes, some sort of... language for programs, if you will. It could be carefully structured in a way that avoids ambiguity and encourages consistency.
This pattern is maybe 20% about AI specifically and 80% about low-trust leadership.
It’s about the “We can’t allow a mineshaft gap!” mentality displayed at the end of Dr Strangelove. If we aren’t using AI then how will we get developers interested in the cutting edge? When we are hiring new recruits they will know we are behind the times. We can’t allow other companies to outpace us in this space. Without an AI-first mindset we will be losing mindshare before our project has even left the planning stage!
The arms race is just as much about keeping up with the Joneses as it is about company security.
I tried AI code completion via Amazon Q. I quickly turned it off; half the suggestions were noise that took more time to review than actually writing it myself.
The one good use I have found is using it to tighten up my writing in design documents. I tend to use too many words to describe things. I use an LLM as a type of prose compressor on my documents. I need to re-read it afterward and make minor corrections, but it still saves me time since summarising my own writing takes much more time and energy.
That is the only net win I've had with LLMs so far.
One thing I suspect is that leadership at tech companies, which previously would have been working from direct experience with technical processes even if they no longer worked directly on their own codebases, is pretty clueless about AI coding because it's so new. All they have to go on is what they read, or sales pitches, or their experience dabbling with Cursor to build simple Python utilities (which AI tools handle pretty well most of the time), and they don't see what it can and can't do on a real codebase.
These are people who are stock market shook. I'd be looking at reducing your exposure to index funds, or, if you were stupid enough to invest in tech stocks directly, cashing out now.
Has anyone else found that these AI tools are worse at C++ than other languages?
My boss keeps pestering me to get people to use these tools. I find they demo nicely for other languages, but aren't very good at non-trivial C++. Besides, our department's problems aren't from writing C++ too slowly. Our problems are a flaky build system, tightly coupled code, and teams that depend on one another spread across US, Europe, and South Asia.
> I find they demo nicely for other languages, but aren't very good at non-trivial C++
I haven't found they are terribly good at non-trivial anything
They will produce an 80-90% solution that then takes me longer to find and fix that last 10-20% than it would have taken me to build it from scratch myself
My suspicion is that people get so wowed by the instant and effortless 90% correct solution that they wind up downplaying how much time and effort that remaining 10% winds up taking
That's funny, there isn't any mandate for using LLMs at my company. Everybody has just quietly added them to their workflows without being told.
LLMs are like chainsaws. In the right hands, they can help you do the same job faster. In the wrong hands, they can cut off a limb. If someone drops a thousand-line PR of total slop, it's not an AI failure, it's a human failure.
Personally, I use LLMs for fill-in-the-middle jobs and boosting test coverage. It's a modest benefit, but I am definitely shipping higher quality software in a shorter period of time compared to before.
They're not bad as starting points for learning new things, either. I've learned a number of new frameworks and libraries by using LLMs as a "better Google". Now I'm using it to learn Rust.
As long as you use your knowledge and experience to evaluate the output, LLMs can only help. I would posit that people committing garbage from LLMs will have no problem writing insecure and buggy code without assistance.
I caveat all this with the totally valid security and privacy concerns around sending code and data to 3rd parties that haven't been vetted rigorously. That said, local LLMs solve that problem handily and are good enough most of the time.
I think that's a good analogy. We use on-prem Copilot and it works pretty well. I find it more useful for exploring problems or debugging than for writing code.
> While 75% of company leaders thought their AI rollout over the past 12 months has been successful, only 45% of employees said the same.
The term "rollout" implies a ready-made and well thought-through implementation, but isn't AI itself too new of a technology for that? A rollout indicates to me possible wrong assumptions about the thing. "Successive evaluation and adoption" would be a better strategy. Perhaps the framing of the relationship with AI as "rollout" reflects how decision-makers think of "AI"—through a vague imagination of efficiency that seems self-evident, but actually is not.
We should also acknowledge that most modern developers aren’t as skilled or knowledgeable as they believe. The majority don’t even meet the standard of being called true engineers.
Many lack experience with distributed, high-load systems, fail to write code that meets cybersecurity requirements, and have no background in system design. On top of that, most have never worked in environments where backend and frontend are properly separated.
I’ve seen so many posts like, "How is it even possible that landing a good junior job requires 1–3 years of experience? After 3 years of coding blogs running on WordPress you can already be a mid on the way to senior!"
> We should also acknowledge that most modern developers aren’t as skilled or knowledgeable as they believe.
I think this is probably true. I suspect that most people think they are above average, but of course it is impossible for most people to be above average
AI tools will definitely allow below-average programmers to get themselves into situations where their reach has exceeded their grasp, and everyone else will pay for that
I see a future where there are things like US law firms: big or small, but with tight rules for their members and market power through hard negotiations. Due to the nature of tech stacks (versus The Law), I doubt that there will be very many of these firms, relatively speaking. Certainly race-to-the-bottom cheap and exploitative coding shops will continue, sometimes based around gambling districts in various places.
The folks who made buggy whips were suspicious of the horseless carriage, and eventually had to go into another line of work.
Question is, is AI-coding a new horseless carriage or a new brand of snake oil?
My guess is that something of lasting worth will grow out of it; and that vibe coding will prove to be snake oil, at least for production code.
In the meanwhile, I'm just glad I'm approaching retirement age--and that I don't work for a short-sighted corporation.
Or a new Zeppelin. An elegant future quickly surpassed by alternatives, remaining mostly as steampunk dreams.
Or a new Saturn V. An engineering marvel only possible by spending huge amounts of money.
Or a new monorail. Rarely cost competitive to more traditional alternatives.
it's not snake oil, there's clearly something there. if you haven't vibecoded a thing or watched a video of someone doing it, you owe it to yourself to do at least that much.
the question is how will the economy adapt to this and is it inflationary or deflationary? was developer time really the bottleneck on business development?
the question of build vs buy just got a giant weight in favor of build
I was lambasted last week for claiming I could vibecode a credible threat to any SaaS company in a week, but I'm not alone in thinking that sort of capability.
Chamath Palihapitiya claims:
> Tell us what enterprise software you use and my team and I will build you an 80% feature complete version at a 90% discount.
https://www.8090.inc/
https://news.ycombinator.com/item?id=38960001
> I was lambasted last week for claiming I could vibecode a credible threat to any SaaS company in a week, but I'm not alone in thinking that sort of capability
Then do it
If it is such a small investment, do it. Let us know how it goes
you're not listening. okay, so let's say I've built an Eventbrite clone. how do I get people to use it? I'm not a salesperson, I don't have the business connections or acumen to make that happen. writing a bit of code is a tiny part of running a successful business.
No shit, that's why I was calling you out for saying "I could vibecode a credible threat to any SaaS company"
No you can't, because the code is not a credible threat to any SaaS company. The business, sales, marketing and such is where they actually get their value
Ah, you see that's where you bring in vibe sales and marketing!
but you do buy that vibecoding with LLMs is currently able to build a large amount of functionality in a very short amount of time, yeah?
I buy that it is able to produce a large volume of code, sure
Sometimes it might even be functioning code. Infinite monkeys with infinite typewriters eventually produce something, right?
Looks like instant technical debt to me.
YC has a big role to play in this mess. Garry Tan really needs to step down. This is what happens when you put a non-hacker in charge of building tech.
I've never been more excited to be a software engineer.
Fight with your org to keep the guardrails. Code coverage, tests, CI. Nothing merged without human review by someone other than yourself. AI commentary on the PR helps speed this along, but there must be accountability.
Now go have fun! If you're typing 100% of the code, you are missing out!
Yesterday with Cursor and a roadmap of bullet point features, I wrote ~15,500 lines of code and deleted ~3,890 lines before lunch. I didn't blindly accept any code; I vetted everything. I read and approved every line of code - think of it like reviewing the code as it appears.
It was not perfect - but whose code is? It is my job to get it into shape once it's been generated.
For example, occasionally it made duplicate components or classes or even files! To that: "hey, looks like there are two FrobWidget classes now, can you take a look?" and it refactored and combined them without disturbing my flow, updating all the call sites too.
When I said "can you make some tests?" it listed out bullet points of names for tests it thought relevant and generated multiple 250+ line test files while I sipped coffee. Because I'd seen the code go in, I could reason about how thorough it was being. At some point I'll ask it to add code coverage.
And this is only guiding one copy of Cursor - imagine a future in which models don't trip up. One engineer can lead a whole fleet.
We aren't paid for how much code we write; we never were. You can use this insane output multiplier to add so much value you make people gasp.
> I vetted everything. I read and approve every line of code
I do not for a second believe that you read and fully understood the impact of 15k lines of code added and 4k lines of code removed in less than four hours ("before lunch")
This is the kind of overconfident BS that is going to run us all into the ground
It helped that I worked from a clear roadmap. After the first feature I was on a roll, so I just kept chugging!
It'd be rare to write this much on a single day - after lunch I was exhausted.
This codebase is on a mature stack - in particular, Postgres and an established DB migration framework, Alembic.
I've learned through this process that as long as you get the data modeling basically right, everything else can be fixed as you proceed through testing your own work, from checking each little UI feature does what the code claims, all the way to end-to-end.
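To give a sense of what I mean by getting the data modeling "basically right" on this stack, here's a minimal sketch of a hypothetical Alembic migration (table and columns invented for illustration, not from the actual project). This is the layer worth slowing down on, because everything built on top of it is cheap to redo.

    """Hypothetical Alembic revision illustrating the data-modeling layer."""
    from alembic import op
    import sqlalchemy as sa

    # revision identifiers, used by Alembic (values invented for this sketch)
    revision = "a1b2c3d4e5f6"
    down_revision = None

    def upgrade():
        # The table shape is the part worth getting right up front;
        # the UI and endpoints built on top of it are cheap to redo later.
        op.create_table(
            "events",
            sa.Column("id", sa.Integer, primary_key=True),
            sa.Column("name", sa.Text, nullable=False),
            sa.Column("starts_at", sa.DateTime(timezone=True), nullable=False),
        )

    def downgrade():
        op.drop_table("events")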
Imagine you can do massive refactors just by asking the model a few questions, what does that unlock?
> I've learned through this process that as long as you get the data modeling basically right, everything else can be fixed up as you proceed through
"basically right" is not the same as "actually right"
> Imagine you can do massive refactors just by asking the model a few questions, what does that unlock?
It unlocks the ability to take an established codebase that is presumably well understood by the people who wrote it and turn it instantly into a completely new codebase that no one understands and is basically impossible to work on
You sound like a junior developer who still thinks that the job is writing lines of code
First let me tip my hat and thank you for engaging on this. I think though that we're talking past each other which isn't my goal. So I'll try to explain a little clearer.
The concerns you raise are valid. The process in my example is more rushed than I'd use in a team context. This is a relatively immature project I'm exploring solo, so the appropriate level of "process overhead" is different, and I didn't take the time to separate that from what I'd do on a team.
In a team, there'd be upfront discussion to align on what's in the "roadmap" (i.e. we'd write a spec), there'd be discussion as the work proceeds if tech debt emerges and refactors are needed, and there'd be CI, tests, coverage, and code review. A good team would own the code forever, so they'd be happy with any refactoring.
When I said massive refactors I meant within the context of one PR. It's insanely freeing to be able to draw with crayons all over your codebase, see what works, and then either `git reset --hard` or refactor it into shape.
If a refactor of existing code is needed, that's its own PR.
In summary, almost nothing changes at the level of team process. Keep all those guardrails - but use these tools to 1000x the amount of typing & exploration you can do!
it's a tool that increases efficiency if you know how to use it. learn how to do it and it'll make your job easier and you become more productive. fight or ignore it and you'll be replaced because you're not doing your job properly
don't listen to ignorant luddite technophobes on the internet, they are like all -phobes intolerant, full of hate and fearful of things they don't understand
> don't listen to ignorant luddite technophobes
Calling large numbers of professional software engineers "luddite technophobes" on Hacker News of all places is so ridiculously absurd I can't tell if you are trying to be serious or not
There are legitimate concerns about the effectiveness of AI tooling. Shouting that anyone expressing those concerns is a luddite is ridiculous
Please get a grip
He's trolling.
It's so hard to tell anymore. How do you create parody when the ceiling on the stupidity of sincere mainstream takes has gone away?
I envy your optimism...
when the shoe fits
who hurt you?
like, i know this is a bit of an inherent trope. but who hurt you pal?
because you seem to be so heavily invested in the success of these tools that even the possibility of there being any sort of criticism about these tools has you posting an intolerant, dismissive and hyperbolic comment yourself.
so… what gives? why the intense attachment to some technological thing? why so defensive? why the reaction? do you feel threatened by the negative criticism? why?
i'm not heavily invested. i'm annoyed by your general ignorance, intolerance and fud spreading
i have fun with ai and use it to improve my work and i'm tired of dickheads talking shit about things i love while trying to destroy my fun because of your fear. "any sort of criticism" my ass. look at your condescending response, who do you think you are?
Why does other people's criticism ("talking shit") destroy your fun? Does it make you unsure of your opinion? If you believe this emotional investment in AI tools gives you a competitive edge in the long run, then so much the better for you. AI tools should work the same way with or without your love. The only difference is that if you have fun using these tools, you'll be better motivated to use them better than the others. (Regardless, existing knowledge probably helps more than pure fun in using these tools effectively in the longer run.)
> if you know how to use it.
This is the problem though, too many people are too trusting of the output.
Furthermore, I see that the AI tooling is being optimized exactly for that: the "acceptance rate" doesn't actually measure whether the code is correct.
Right, and those people really need to learn not to do that.
This reminds me of an old post somewhere
"Everyone will not just"
If your solution to any problem includes the phrase "If everyone would just X" then you don't have a real solution, because there is no future where "everyone will just X"
"Everyone just needs to learn not to trust the AI so readily"
That isn't a real solution. Everyone will not just become more skeptical and better at critical thinking
I don't think those are the same problem.
"If only everyone would just do X" where X is a thing that has a positive benefit is unrealistic.
"If only people would learn not to do Y" where Y is a thing which actively hurts them and which they can learn the negative consequences over time feels a whole lot less impossible to me.
If your co-worker pulls this crap, tell them not to. Complain about it in performance reviews. Replace them with someone who writes good code.
> Complain about it in performance reviews
The last time I brought up concerns about a coworker's performance in a review, I was fired a couple of weeks later
It turned out he was good buddies with my boss's boss
There are a lot of mechanisms in society preventing a lot of people from feeling the negative consequences of their poor work
I wish it were so easy as you say