Ask HN: How to deal with long vibe-coded PRs?

186 points | philippta | about 1 month ago | 349 comments

Comments

Yizahi|about 1 month ago
An alternative to the reject-and-request-rewrite approach, which may not work in a corporate environment: you schedule a really long video call with the offending person, with an agenda politely explaining that such a huge and extensive change requires a collaborative meeting. You then notify your lead that a huge new task has arrived which will take X hours from you, so if he wishes to re-prioritize your tasks, he is welcome to. And then, if the meeting happens, you literally go line by line, demanding that the author explain each change to you. And if the explanation or the meeting is refused, you can reject the PR with a clear explanation of why.
a4isms|about 1 month ago
This is the answer, and it has been the answer going back to the before times when we didn't have pull requests but we did in-person team code reviews before merging branches (yes, really). A massive, complicated merge without documentation and extensive support from other interested/impacted people and teams justifying things like a parser DSL? That is always going to be a problem whether AI generated it or the developer locked their office door and went on a three-day manic coding binge.

AI doesn't directly make this stuff worse, it accelerates a team's journey towards embracing engineering practices around the code being written by humans or LLMs.

djmips|about 1 month ago
We still do this on very big, fundamental changes.
bb88|about 1 month ago
I would recommend inviting the person, his manager, and your manager.

To start off the meeting, I would say something like, "To me this is a surprising PR. I would expect it to be maybe 500 lines (e.g.), including unit tests. Why does it need to be this complicated?"

If your manager just folds, I would accept it, because he's probably so beaten down by the organization that he's not going to stick up for his employees anyway. At that point, it's time to look for another job.

But if the managers listen to their employees, and you have a better alternative, and your arguments are reasonable, it should be fine.

ericmcer|about 1 month ago
It doesn't even need to be a long call, just a one off "hey this is a big PR, got a sec to run me through it" works.

Usually within a few questions the answer "the AI wrote it that way" will come out.

Which feels bananas to me, like you don't understand how the code you PR is doing what you want? That would feel like showing up to work with no pants on for me.

iamleppert|about 1 month ago
Oh, how I would love to work with you. I'd drown you in more meetings, documentation on code (LLM generated of course) than you could ever imagine.

You can use the LLM to generate as much documentation on the changes as you want. Just give it your PR. If someone tries to reject your vibe coded AI slop, just generate more slop documentation to drown them in it. It works every time.

If they push back, report them to their manager for not being "AI first" and a team player.

a4isms|about 1 month ago
If we look at this as a system with work flowing through it, the "theory of constraints" quickly tells us that code review is the bottleneck, and that speeding up code generation doesn't raise system throughput and can actually lower it, since work-in-progress just piles up in front of the reviewers.

This is not new stuff, Goldratt warned us about this twenty+ years ago.

https://en.wikipedia.org/wiki/Theory_of_constraints
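A toy queueing sketch of that claim (the rates are made up for illustration, not from the thread): in a serial pipeline, merged output is capped by the review stage, so multiplying generation speed only grows the backlog.

```python
def simulate(days, gen_rate, review_rate):
    """Each day, `gen_rate` PRs arrive; reviewers can merge at most
    `review_rate` per day. Returns (merged, backlog) after `days`."""
    backlog = merged = 0
    for _ in range(days):
        backlog += gen_rate               # AI speeds up this number...
        done = min(backlog, review_rate)  # ...but review is the constraint
        backlog -= done
        merged += done
    return merged, backlog

# 10x faster generation: identical merged output, 16x the pile-up.
print(simulate(10, gen_rate=5, review_rate=2))   # -> (20, 30)
print(simulate(10, gen_rate=50, review_rate=2))  # -> (20, 480)
```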

zeroCalories|about 1 month ago
When my manager pings me about it I'll just show him your ai slop and tell him we'll be liable for all the bugs and production issues related to this, in addition to maintaining it. Then let him make the choice. Escalate if needed.
watwut|about 1 month ago
Sir, I see a big senior-manager future for you. My hat's off.
Msurrow|about 1 month ago
See, now that’s a proper f** you in corporate-speak.
ekjhgkejhgk|about 1 month ago
This guy corporates.
tokioyoyo|about 1 month ago
Honestly, this approach would probably get you fired eventually for non-coop behaviour in every company I’ve worked at.

AI slop is becoming the go-to way a lot of code gets written. At some point it'll make more sense to find a solution to the problem ("how to be confident in slop code") rather than to fight the financial motives of the business owners (cut expenses, maximize profit somehow through AI). I'm not sure if it's right or wrong, but it is what it is.

pavelai|30 days ago
> Honestly, this approach would probably get you fired eventually for non-coop behaviour in every company I’ve worked at

What's so non-coop about this? The huge PR is the non-cooperative work, and it needs to be fixed. Just imagine someone blocking the work of the whole branch just to review 9,000 LOC that they didn't even write. It's simply not an option.

And even if there were an AI that explains what this code is doing, people wouldn't be able to check it manually in a reasonable time. It's just an enormous piece of work. So until there is a solution to this, such PRs should be declined and rewritten.

Also, it seems the worker who brought this code doesn't know how to evaluate the complexity of the task relative to the solution, which raises a question about their qualifications and level.

embedding-shape|about 1 month ago
> Honestly, this approach would probably get you fired eventually for non-coop behaviour in every company I’ve worked at.

I don't think I've ever worked in a company that would fire someone for something like that. Maybe you'd get a scheduled conversation to talk about it, to try to resolve whatever is going on (in this case verbose AI slop PRs), since obviously something is going wrong when people start engaging in malicious compliance.

But then, I also never worked in a country where people can be fired for any reason whatsoever; it's always been a legal requirement (in the countries I've lived in, at least) that a firing be properly justified and explained, and firing someone for that wouldn't be justified.

fifilura|about 1 month ago
Wow, 636 karma points in 8 days since registering! You are on a roll! (FWIW i upped this comment).
embedding-shape|about 1 month ago
Hah, don't say that, just makes my current computer and HN obsession even more clear, I've hidden upvotes in my browser just to avoid realizing that! (jokes aside, I do probably spend too much time on HN at this moment...)
tokioyoyo|about 1 month ago
It wouldn’t be a direct firing, but if the majority of the company/team is pro-AI, people will complain privately about you "being a person hard to work with". Eventually, after 6 months of repetitive passive-aggressiveness, you’ll be let go or get pay cuts.
userbinator|about 1 month ago
If the company is like that, chances are you wanted to find a better job anyway.
throwawayffffas|about 1 month ago
> How would you go about reviewing a PR like this?

Depends on the context. Is this from:

1. A colleague in your workplace. You go "Hey ____, That's kind of a big PR, I am not sure I can review this in a reasonable time frame can you split it up to more manageable pieces? PS: Do we really need a DSL for this?"

2. A new contributor to your open source project. You go "Hey ____, Thanks for your interest in helping us develop X. Unfortunately we don't have the resources to go over such a large PR. If you are still interested in helping please consider taking a swing at one of our existing issues that can be found here."

3. A contributor you already know. You go "Hey I can't review this ___, its just too long. Can we break it up to smaller parts?"

Regardless of the situation be honest, and point out you just can't review that long a PR.

ljm|about 1 month ago
If it’s the first one I’d be going a step further back to see how the work was defined. More often than not I’d expect the PR comes from a ticket that is too broad in scope and could have been broken down with a bit of architectural thinking.

The problem being that once someone has put together a PR, it’s often too late to go back to the serious thinking step and you end up having to massage the solution into something workable.

MartijnHols|about 1 month ago
Telling a new contributor no thank you is hard. Open source contributors are hard to come by, and so I’ve always dealt with PRs like this (albeit before AI days but from people who had never written a line of code before their PR) by leaving a message that it’s a huge PR so it’s going to take a while to review it and a request to make smaller PRs in the future. A couple of times I ended up leaving over a hundred review comments, but most times they were all fixed and the contributor stuck around with many better PRs later.
throwawayffffas|about 1 month ago
> Telling a new contributor no thank you is hard.

In life in general, having the wherewithal to say no is a superpower. While I appreciate the concern about alienating newcomers, you don't start contributing to an existing project by adding 9k lines of the features you care about. I have not run any open source projects that accept external contributions, but my understanding in general is that you need to demonstrate that you will stick around before being trusted with adding large features. All code is technical debt; you can't just take on every drive-by pull request in the hope they will come back to fix it when it breaks a year down the line.

latexr|about 1 month ago
The vast majority of PRs are bad. They could even be described as “selfish” in the sense that the “contributor” is haphazardly making whatever change minimally fixes their exact use case without consideration for the project’s style, health, usability, or other users. This isn’t outright malicious or even deliberately inconsiderate, but it still has a negative effect.

Refusing such a PR (which, again, is most of them) is easy. But it is also time-consuming if you don't want to be rude. Everything you point out as inadequate is a chance for them to rebut or "fix" it in a way that is again unsatisfactory, which only leads to more frustration and wasted time. The solution is to be specific about the project's goals but vague about the code. Explain why you feel the change doesn't align with what you want for the project, but don't critique specific lines.

There are, of course, exceptions. Even when I refuse a PR, if it’s clear it was from a novice with good intentions and making an effort to learn, I’ll still explain the issues at length so they can improve. If it’s someone who obviously used an LLM, didn’t understand anything about what they did and called it a day, I’ll still be polite in my rejection but I’ll also block them.

Ginger Bill (creator of Odin) talked about PRs on a podcast a while back and I found myself agreeing in full.

https://www.youtube.com/watch?v=0mbrLxAT_QI&t=3359s

eru|about 1 month ago
Git is flexible enough that you can tell people to break up their PR. They don't have to redo all their work.

If you want to be really nice, you can even give them help in breaking up their PR.

Cthulhu_|about 1 month ago
Yeah exactly, the OP describes a completely new service built start to finish all in one merge request, where normally you'd start with a proposal and work from there.
eru|about 1 month ago
You can even create the proposal retroactively from the PR, if you already have the PR.
viccis|about 1 month ago
Open source? Close it and ask them to resubmit a smaller one, justifying the complexity of things like a DSL if they want it included.

For work? Close it and remind them that their AI velocity doesn't save the company time if it takes me many hours (or even days, depending on the complexity of the 9k lines) to review something intended to be merged into an important service. Ask them to resubmit a smaller one and justify the complexity of things like a DSL if they want it included. If my boss forces me to review it, then I do so and start quietly applying for new jobs where my job isn't to spend 10x (or 100x) more time reviewing code than my coworkers did "writing" it.

oarsinsync|about 1 month ago
> If my boss forces me to review it, then I do so and start quietly applying for new jobs where my job isn't to spend 10x (or 100x) more time reviewing code than my coworkers did "writing" it.

Another equally correct approach (given the circumstances of the organisation) is to get a different AISlopBot to do the review for you, so that you spend as much time reviewing as the person who submitted the PR did coding.

ffsm8|about 1 month ago
That only works if you're not personally responsible for the code you review, too.
Hamuko|about 1 month ago
If they're okay with vibe-coded code, they should be fine with vibe-coded reviews too. You really only should be in a situation where you have more responsibility over your reviews than other people have for their code if you're in charge, and if you're in charge, just ban the practice.
stevula|about 1 month ago
The problem is other people/teams making PRs to your code that you then have to maintain or fix later. It’s in your interest not to half-ass the review, creating an asymmetric amount of work for you vs them.
throwup238|about 1 month ago
Just don’t give the AI agent an “approve_pr” tool. It can only comment or reject.
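A minimal sketch of that idea, assuming a hypothetical agent framework where a bot's tools are an explicit allow-list (the class and tool names here are invented for illustration): the review agent never receives an approve capability, so approval stays impossible by construction rather than by policy.

```python
class ReviewBot:
    """Hypothetical review agent whose tool set is an explicit allow-list."""

    ALLOWED_TOOLS = {"comment", "request_changes"}  # deliberately no "approve_pr"

    def __init__(self):
        self.actions = []

    def act(self, tool, payload):
        # Any tool outside the allow-list is refused before it can run.
        if tool not in self.ALLOWED_TOOLS:
            raise PermissionError(f"tool {tool!r} not granted to review bot")
        self.actions.append((tool, payload))

bot = ReviewBot()
bot.act("comment", "This helper appears to be dead code.")
# bot.act("approve_pr", "LGTM") would raise PermissionError
```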
kortilla|about 1 month ago
But then what? At the end it’s still on you to approve and you have no idea what is hiding in the code.
jjcob|about 1 month ago
You don't approve it. You just slowly grind the submitter down with minor feedback. At some point they lose interest and after a year you can close the PR, or ask the submitter to open a new PR.
adastra22|about 1 month ago
I hope you don’t actually do this to people.
jjcob|about 1 month ago
It works best if you don't reply immediately. I recommend successively increasing the response delay. Keep it short enough to make sure that they don't start bugging you on other channels, but long enough to make sure they have time to cool down and question if the continued effort is really worth it.

As long as the response delay increases at least geometrically, there is a finite bound to the amount of work required to deal with a pull request that you will never merge.
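The geometric-backoff claim can be checked with a small sketch (delays in days, purely illustrative): if each reply delay doubles, the number of review rounds that fit in any fixed horizon grows only logarithmically, so total reviewer effort stays bounded.

```python
def rounds_within(horizon_days, first_delay=1.0, factor=2.0):
    """Count the review rounds that fit in `horizon_days` when the
    reviewer's response delay starts at `first_delay` days and is
    multiplied by `factor` after every round."""
    elapsed, delay, rounds = 0.0, first_delay, 0
    while elapsed + delay <= horizon_days:
        elapsed += delay
        delay *= factor
        rounds += 1
    return rounds

# Doubling delays: one year absorbs only 8 rounds, ten years only 11.
print(rounds_within(365), rounds_within(3650))  # -> 8 11
```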

oarsinsync|about 1 month ago
Tragically, when you are organisationally impaired from saying 'no', this is the only way (besides, you know, quitting and getting a new job).

It's absolutely soul crushing when you're motivated to do a good job, but have a few colleagues around you that have differing priorities, and aren't empowered to do the right thing, even when management agrees with you.

adastra22|about 1 month ago
I am both an open source maintainer and contributor. This is absolutely despicable behavior. You are purposefully wasting the time of a contributor for no other reason than your own fear of saying “no.”

If you’re not going to merge something, just fucking say so.

xigoi|about 1 month ago
Wasting the time of someone who put no effort whatsoever into their work and wants you to put in a lot of effort? Fine by me.
thunderfork|about 1 month ago
If you've read the thread, the strategy you're replying to is about a workplace scenario where outright rejection is, for whatever reason, forbidden; not an open source situation where "no" is readily available.
adastra22|about 1 month ago
It makes even less sense in a work context. This behavior will permanently alienate this user and potential customer. I’ve seen this exact scenario play out many times before.
estimator7292|about 1 month ago
Why would it be acceptable for the submitter to behave this way but not the reviewer? We do have AI-"assisted" submitters behaving exactly like this, acting irate when forced to actually reflect on the turd they're trying to shove into my inbox.
lukan|about 1 month ago
If people do this to him? How else to react?

The context here is lots of vibe coded garbage thrown at the reviewer.

adastra22|about 1 month ago
It takes less time and effort to close with an explanation why. It is going out of your way to waste the time of a contributor.
lukan|about 1 month ago
Context here is a corporate scenario where just closing is not possible.
adastra22|about 1 month ago
Why waste anyone’s bandwidth on this? As maintainer of some open source projects, there are no circumstances in which I would accept a 9kLOC drive by contribution like this. State so and close it.
oarsinsync|about 1 month ago
> Why waste anyone’s bandwidth on this?

The conditional was: If my boss forces me to review it

> As maintainer of some open source projects, there are no circumstances in which...

...you would force yourself to do anything that you don't want to do. Your approach is absolutely correct for the organisational circumstances in which this might happen to you.

There are other organisational circumstances where being the squeaky wheel, even when it's the right thing to do for the business, will be the wrong thing for you personally. It's valuable to identify when you're standing in front of a steamroller, and get out of the way.

rob74|about 1 month ago
Ok, but then I would also prefer OP's approach: if things reach this level of dysfunction, you should really consider looking for another job...
charlieyu1|about 1 month ago
Boss forced me? Good. I’ll take a look at the first 100-200 lines, find 3-5 critical or deadly errors, document them clearly, and write to the boss about how this vibe-coding shit is wasting so much of my time.
adastra22|about 1 month ago
Have a backbone. I would seriously quit on the spot if requested to break my professional integrity with respect to open source development. I have been in this situation before too, so I’m not just saying it.
harimau777|about 1 month ago
In this job market that's not an option for the majority of people.
beefnugs|about 1 month ago
Funny how all the savings on employees mean they'll have to hire specialized AI-code reviewers now.
khannn|about 1 month ago
Makes me want to write my own AI bot that brutally tears into any PR so I can reject it.
krackers|about 1 month ago
> then I do so and start quietly applying

In this job market? And where pretty much every company seems to be following the top-down push for AI-driven "velocity"?

viccis|about 1 month ago
That's why I would start applying instead of just quitting. There are plenty of companies that use AI responsibly or not much at all.
xeonmc|about 1 month ago
This is why we need a programmer union, so that coders can collectively reject reverse-centaur slopwork, like miners rejecting asbestos mines or workers refusing to fix dangerous machines while they’re running.
SecondHandTofu|about 1 month ago
More political arguments about the other effects of unions aside - I've never heard a good answer for why unions are good for workers in professions with wide ranges of skill and impact, such as lots of types of knowledge work. Do you have an answer for that?

Roles that are more fungible, train drivers, factory workers, I can see the case from the worker's perspective, even if I think there are externalities.

But I can't even see it from a worker's perspective in roles such as software or sales, why would anyone good want to work in an environment where much worse workers are protected, compensation is more levelised etc?

I'm assuming this will boil down to some unspoken values differences but still thought I'd ask.

jon-wood|about 1 month ago
A union does whatever its members want the union to do. I'd argue that an environment where pay negotiation is a case of every person for themselves isn't actually good for anyone but if the majority of members disagree with me then the union won't get involved in pay. If they wanted to they could scope the union's responsibility purely to being notified of budget reductions/redundancies and given a seat at the table when working out how to handle them.
bluGill|about 1 month ago
A union works best when workers see they are all in it together. There are lots of unions, but it is much harder for them to be powerful when members see defecting as helping themselves. There is a reason unions are most common in labor areas where everyone is interchangeable. You can't be a better bus driver than someone else (either you are bad enough to fire or you are as good as everyone else). The assembly line is only as good as the worst/slowest person on it, so there is no advantage in being faster at putting bolts in, or whatever you do. (Unions can sometimes push safety standards, though such improvements also come from others, with the union taking the credit.)
eru|about 1 month ago
> The assembly line is as good as the worst/slowest person on it, so there is no advantage in being faster at putting bolts in, or whatever you do [...]

I guess you have no experience with assembly lines?

> (unions can sometimes push safety standards, but also comes from others who have the union take credit)

Btw, health and safety are what economists call a 'normal good'. See https://en.wikipedia.org/wiki/Normal_good

> In economics, a normal good is a type of a good for which consumers increase their demand due to an increase in income, unlike inferior goods, for which the opposite is observed. When there is an increase in a person's income, for example due to a wage rise, a good for which the demand rises due to the wage increase, is referred as a normal good. Conversely, the demand for normal goods declines when the income decreases, for example due to a wage decrease or layoffs.

That explains fairly well why rich countries all have more-or-less similar health and safety standards despite very different histories, especially histories of labour activism, and why poor countries fare worse in this respect, even if some of them have laws on the books that are just as strict.

bluGill|about 1 month ago
> I guess you have no experience with assembly lines?

I've spent a few weeks on one, so not zero, but not a lot.

Note that I greatly simplified a real assembly line, and there are lots of different lines with different configurations. Nearly everything is multiple lines. There are often buffers along the way so that you can get a little ahead of the line (or, if you need to use the restroom, the line continues). Sometimes there are two people in a station with the understanding that if both are perfect they are 80-90% busy (or some such number), but if someone is slow the other can help out. Lines often run slower than possible because of safety. There are likely more issues, but there is a point where the line is waiting on the slowest person.

With the above in mind, what am I missing?

eru|29 days ago
I'm not a great expert on assembly lines, to be honest. But two things:

- From theoretical considerations (less important): you can be better not just by improving average speed, but also by reducing variance (ie being more reliable) and improving quality.

- A practical consideration (more important): from what I recall, even people on assembly lines are often paid piece rates. Ie they are paid more or less proportional to their output. Assuming companies aren't complete idiots, we can assume that they have a good reason for rewarding individuals for higher output? That seems to be in at least mild contradiction to "The assembly line is as good as the worst/slowest person on it, [...]"

eru|about 1 month ago
> A union does whatever its members want the union to do.

Just like a democracy does whatever its voters want it to do?..

Different people want different things.

> I'd argue that an environment where pay negotiation is a case of every person for themselves isn't actually good for anyone but if the majority of members disagree with me then the union won't get involved in pay.

Well, I feel for the minority that doesn't want the union to get involved in their affairs.

bravetraveler|about 1 month ago
Not a developer, but close enough: so that 'good' stays 'good' and doesn't become 'expected'. Or, said another way, I can enjoy protections too. Automation allows us to do more, actually doing more isn't necessary: remember the tools/why they were made. Yet expectations continue to ride an escalator.

I don't know why one would want to maintain a system of 'look how high I can still jump after all these years, reward please'. Again, expectations: they rise faster than the rewards.

The adversarial framing with coworkers is confusing, discipline is a different matter from collective bargaining.

sfn42|about 1 month ago
> why would anyone good want to work in an environment where much worse workers are protected

The "much worse workers" are the majority. That's why you see everyone complaining about technical interviews and such - those of us who crush the interviews and get the jobs don't mind.

kaoD|about 1 month ago
How old are you? I'm in your boat but I suspect we'll change our tune when we get older.
sfn42|about 1 month ago
My 40s aren't too far off. I don't expect to lose that much of my ability
kaoD|about 1 month ago
Yeah I'm not worried about my ability, but the perceived value from employers. We're probably in the sweet spot where we're still "young" but also very experienced.
sfn42|about 1 month ago
That would be quite ridiculous in my opinion. Most of my peers hardly stay in one job for more than 2-3 years anyway, so unless you're retiring in the next two years I don't see why they would have a problem with it.

Of course I live in a country where retirement savings isn't your employer's responsibility. I think the US has some ridiculous retirement practices that may make older employees a bit of a hot potato situation?

yunwal|about 1 month ago
I’m quite good at technical interviews, and I still think they’re not a good way to find the best person for the job in 95% of places they’re used
sfn42|about 1 month ago
I'm not really commenting on that, I'm saying the practice is good for me as an interviewee.

However, I do think it's a good way to filter candidates. I should clarify that what I'm talking about is fairly basic programming tasks, not very hard leet-code-style DSA tasks. I've never been given an actually hard task in an interview; they've all been fairly simple tasks like writing a bracket tax calculator, or a class that stores car objects and can get them by plate number, and things like that. I helped a friend do a take-home one where we fetched some data from SpaceX's API and displayed it in an HTML table.

Every time I do these, people act like I'm Jesus for solving a relatively simple task. Meanwhile I'm just shocked that this is something my peers struggle with. I would have honestly expected any decent dev to be able to do these with roughly the same proficiency as myself, but it turns out almost nobody can.

That's why I think it's a good way to test candidates. If you're going to work as a programmer you should be able to solve these types of tasks. I don't care if you're frontend, backend, finance, healthcare, data science, whatever kind of programming you normally do, you should be able to do these kinds of things.

If someone can't, then by my judgment they don't really know programming. They may have figured out some way to get things done anyway, but I bet the quality of their work reflects their lack of understanding. I've seen a lot of code written by this kind of person; it's very clear that a lot of developers really don't understand the code they're writing. It's honestly shocking how bad most "professional software developers" are at writing simple code.
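For reference, here is a sketch of the first interview task mentioned above, a bracket tax calculator. The bracket thresholds and rates are made up for illustration; they are not from any real tax code or from the commenter's actual interview.

```python
import math

def bracket_tax(income, brackets):
    """`brackets` is a list of (upper_bound, rate) pairs in ascending
    order; each slice of income is taxed at its own bracket's rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break  # no income left in this bracket or above
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# Made-up brackets: first 10k tax-free, next 40k at 20%, the rest at 40%.
BRACKETS = [(10_000, 0.00), (50_000, 0.20), (math.inf, 0.40)]
print(bracket_tax(60_000, BRACKETS))  # -> 12000.0
```

The key point candidates tend to miss is that only the slice of income inside each bracket is taxed at that bracket's rate, not the whole amount.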

wongarsu|about 1 month ago
In theory you could limit the scope of the union to not include things like negotiating salary or defending workers from being fired. I don't think anything prevents you from having a union that just fights for basic rights like good chairs, not having to review AI slop and not being exposed to asbestos.

Of course keeping the union narrowly focused is an issue. Unions are a democracy after all

eru|about 1 month ago
> Of course keeping the union narrowly focused is an issue. Unions are a democracy after all

Yep, and I don't want my neighbours to vote on the colour of my underwear or what I have for breakfast either. They can mind their business, and I can mind mine.

wizzwizz4|about 1 month ago
Look into SAG-AFTRA.
eru|about 1 month ago
As long as you don't pretend to talk for people who don't want to be talked for, go ahead and knock yourself out.
immibis|about 1 month ago
Are AI slop reviews threatening to your life?
charlieyu1|about 1 month ago
Yes, the code is so bad if I let them pass it could put the entire company out of business
zwnow|about 1 month ago
AI-generated code is threatening the whole tech industry while also threatening to hurt tons of users, because people who have no business building and deploying apps suddenly feel like they can. That Tea app was a good example of that, endangering thousands of women by leaking private conversations and address data.

If AI slop infiltrates the projects enterprises are built upon, it's likely that companies and their customers get metaphorically hurt too, because of a spike in outages etc. (which is already happening, given AWS got something like 7,000 outage reports after getting rid of another 14,000 employees).

Yes, AI can be cool, but can we stop being this blind regarding its limitations, use cases, how it's actually used, how it actually benefits humanity, and so on? Like, give me a valid reason for Sora existing (other than monetizing the attention spans of humans, which I consider highly unethical).

kanwisher|about 1 month ago
Funny that the app made to destroy other people's lives with anonymous tips that could be fake ended up hurting the real perpetrators. Almost like it was karma.
bmn__|about 1 month ago
Reply intended to user zwnow who is banned by HN, so I cannot reply directly.

You confuse intent with reality. The social software under discussion was abused immediately for the criminal purpose of spreading falsehoods about men, both with malicious intent and wilful negligence, which is particularly egregious because the victims were not made aware of the slander. Even if they wanted to defend themselves, they were prevented from doing so because of the institutionalised sexism, men are banned from participating on grounds of their sex alone. The proof for this is in the leaks. You failed to take this into account and hence got downvoted into oblivion, not for the reason you claim.

The other facts you write about are part of a different narrative, they are not directly relevant to kanwisher's proposition.

IMO, we should not have any tolerance for platforms that are designed for gossip, because of the boy-who-cried-wolf backlash effect: if a woman is a genuine victim, people will take the priors into account and most will assume she's a liar too, and this lets the perpetrators off the hook. I do not want to live in such a society. The way out of this is holding women accountable; they should be punished for criminal behaviour with immediate and drastic consequences, and tenfold so for their enablers. The problem would stop overnight.

zwnow|about 1 month ago
Hold women accountable for men being a general threat to them? Sure, it's their fault /s
bmn__|about 1 month ago
That's not what I wrote. You know that, I know that you know, and you know that I know.

If you can't have a conversation with a modicum of respect, then GTFO HN. We don't need pages filled with pretence and stupid arguments that go nowhere and change no one's mind.

zwnow|about 1 month ago
Acting like HN was a good platform... Also, you argued like an incel, so it's hard to take you seriously.
immibis|about 1 month ago
Okay, can you avoid comparing a company going bankrupt because of a bad bet on AI, to a person getting mangled and crushed into a cube inside of an industrial machine?
zwnow|about 1 month ago
Where did I compare that? Getting hurt has variations. Your privacy can be hurt, your physique can be hurt, your emotions can be hurt.
rTX5CMRXIfFG|about 1 month ago
No. Programmer unions are going to shrink the economy and make the current job market a permanent trajectory instead of a cyclical one.

I can’t think of why the idea of unions is gaining popularity in some programmer circles, other than that its advocates simply don’t have economic common sense.

gregoryl|about 1 month ago
> Programmer unions are going to shrink the economy and make the current job market a permanent trajectory instead of a cyclical one

How?

nextlevelwizard|about 1 month ago
When you are applying while you still have a job, you are more desirable and you aren't desperate, so you can take your pick. If your current job is bad then you can't really lose much.

Otherwise you need to be the person at the company who cuts through the bullshit and saves it when the VibeCodeTechDebt bubble pops across the industry.

zwnow|about 1 month ago
The market only sucks for devs that lack experience or have a skillset that's oversaturated. If you only know React and Python, I'm sorry, but there are like 20 million devs just like you, so the one that's willing to work for the smallest coin is going to win.
jonchurch_|about 1 month ago
We are seeing a lot more drive by PRs in well known open source projects lately. Here is how I responded to a 1k line PR most recently before closing and locking. For context, it was (IMO) a well intentioned PR. It purported to implement a grab bag of perf improvements, caching of various code paths, and a clustering feature

Edit: I left out that the user got flamed by non-contributors for their apparently AI-generated PR and description (rude), in defense of which they did say they were using several AI tools to drive the work:

We have a performance working group which is the venue for discussing perf based work. Some of your ideas have come up in that venue, please go make issues there to discuss your ideas

my 2 cents on AI output: these tools are very useful, please wield them in such a way that it respects the time of the human who will be reading your output. This is the longest PR description I have ever read and it does not sound like a human wrote it, nor does it sound like a PR description. The PR also does multiple unrelated things in a single 1k line changeset, which is a nonstarter without prior discussion.

I don't doubt your intention is pure, ty for wanting to contribute.

There are norms in open source which are hard to learn from the outside, idk how to fix that, but your efforts here deviate far enough from them in what I assume is naivety that it looks like spam.

jonchurch_|about 1 month ago
Daniel Stenberg of curl gave a talk about some of what they've been experiencing, mostly on the security beg-bounty side. A bit hyperbolic, and his opinion is clear from the title, but I think a lot of maintainers feel similarly.

“AI Slop attacks on the curl project” https://youtu.be/6n2eDcRjSsk

lkramer|about 1 month ago
I think it's only fair to give an example where he feels AI is used correctly: https://mastodon.social/@bagder/115241241075258997
jonchurch_|about 1 month ago
Wow, very cool, they've now closed 150 bugs identified via AI assistance/static analysis!

For ref, here is the post from Joshua Rogers about their investigation into the tooling landscape which yielded those findings

https://joshua.hu/llm-engineer-review-sast-security-ai-tools...

yodsanklai|about 1 month ago
You review it like it wasn't AI generated. That is: ask author to split it in reviewable blocks. Or if you don't have an obligation to review it, you leave it there.
resonious|about 1 month ago
This is it. The fact that the PR was vibe coded isn't the problem, and doesn't need to influence the way you handle it.
gdulli|about 1 month ago
It would be willfully ignorant to pretend that there's not an explosion of a novel and specific kind of stupidity, and to not handle it with due specificity.
rablackburn|about 1 month ago
> It would be willfully ignorant to pretend that there's not an explosion of a novel and specific kind of stupidity

I 100% know what you mean, and largely agree, but you should check out the guidelines, specifically:

> Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.

And like, the problem _is_ *bad*. A fun, on-going issue at work is trying to coordinate with a QA team who believe chatgpt can write css selectors for HTML elements that are not yet written.

That same QA team deeply care about the spirit of their work, and are motivated by the very relatable sentiment of: you DON'T FUCKING BREAK USER SPACE.

Yeah, in the unbridled, chaotic, raging plasma that is our zeitgeist at the moment, I'm lucky enough to have people dedicating a significant portion of their life to trying to do quality assurance in the idiomatic, industry best-standard way. Blame the FUD, not my team.

I would put it to you that they do not (yet) grok what, for lack of a more specific universally understood term, we are calling "AI" (or LLMs if you are fancy, though of course none of these labels are quite right). People need time to observe and learn. And people are busy with /* gestures around vaguely at everything */.

So yes, we should acknowledge that long-winded trash PRs from AI are a new emergent problem, and yes, if we study the specific problem more closely we will almost certainly find ever more optimal approaches.

Writing off the issue as "stupidity" is mean. In both senses.

watwut|about 1 month ago
I do not think that is being curmudgeonly. Instead, OP is absolutely right.

We collectively used the strategy of "we pretend we are naively stupid and don't talk directly about issues" in multiple areas... and it failed every single time in all of them. It never solves the problem; it just invites bad/lazy/whatever actors to play manipulative semantic games.

WalterSear|about 1 month ago
I contend that far and away the biggest difference between entirely human-generated slop and AI-assisted stupidity is the irrational reaction that some people have to AI-assisted stuff.
JoshTriplett|about 1 month ago
Many of the people who submit 9000-line AI-generated PRs today would, for the most part, not have submitted PRs at all before, or would not have made something that passes CI, or would not have built something that looks sufficiently plausible to make people spend time reviewing it.
HelloNurse|about 1 month ago
Most of those people should still keep their ignorance to themselves, without bothering actual programmers, like they did before LLM hype convinced them that "sufficiently plausible" is good enough.

A similar trend: the popularity of electric scooters among youngsters who would otherwise walk, use public transport, or use decent vehicles increases accidents in cities.

JoshTriplett|about 1 month ago
I think my comment may have been misparsed. I was observing that one of the problems with LLMs is making it possible for people to produce 9000-line PRs they don't understand where previously they might have been gated by making something even remotely plausible that compiles or passes CI.
WalterSear|about 1 month ago
9000-line PRs were never a good idea; they only seemed acceptable because we were forced into bad PR review practices. Coding was expensive and management beat us into LGTMing them into the codebase to keep the features churning.

Those days are gone. Coding is cheap. The same LLMs that enable people to submit 9000 line PRs of chaos can be used to quickly turn them into more sensible work. If they genuinely can't do a better job, rejecting the PR is still the right response. Just push back.

hatefulmoron|about 1 month ago
Calling things "slop" is just begging the question. The real differentiating factor is that, in the past, "human-generated slop" at least took effort to produce. Perhaps, in the process of producing it, the human notices what's happening and reconsiders (or even better, improves it such that it's no longer "slop".) Claude has no such inhibitions. So, when you look at a big bunch of code that you haven't read yet, are you more or less confident when you find out an LLM wrote it?
fragmede|about 1 month ago
If you try to one-shot it, sure. But not if you question Claude, point out the error of its ways, tell it to refactor and ultrathink, or point out that two things have similar functionality and could be merged. It can write unhinged code with duplicate unused variable definitions that don't work, but it'll fix it up if you call it out, or you can just do it yourself. (Cue questions of whether, in that case, it would just be faster to do it yourself.)
hatefulmoron|about 1 month ago
I have a Claude Max subscription. When I think of bad Claude code, I'm not thinking about unused variable definitions. I'm thinking about the times you turn on ultrathink, allow it to access tools and negotiate its solution, and it still churns out an overcomplicated yet only partially correct solution that breaks. I totally trust Claude to fix linting errors.
fragmede|about 1 month ago
It's hard to really discuss in the abstract though. Why was the generated code overly complicated? (I mean, I believe you when you say it was, but it doesn't leave much room for discussion.) Similarly, what's partially correct about it? How many additional prompts does it take before you a) use it as a starting point, b) use it because it works, c) don't use any of it and just throw it away, or d) post about why it was lousy to all of the Internet reachable from your local ASN?
hatefulmoron|about 1 month ago
I've read your questions a few times and I'm a bit perplexed. What kind of answers are you expecting me to give you here? Surely if you use Claude Code or other tools you'd know that the answers are so varying and situation specific it's not really possible for me to give you solid answers.
fragmede|about 1 month ago
However much you're comfortable sharing! Obviously ideal would be the full source for the "overly complicated" solution, but naturally that's a no go, so even just more words than a two word phrase "overly complicated". Was it complicated because it used 17 classes with no inheritance and 5 would have done it? Was it overly complicated because it didn't use functions and so has the same logic implemented in 5 different places?

I'm not asking you, generically, about what bad code do LLMs produce. It sounds like you used Claude Code in a specific situation and found the generated code lacking. I'm not questioning that it happened to you, I'm curious in what ways it was bad for your specific situation more specifically than "overly complicated". How was it overly complicated?

Even if you can't answer that, maybe you could help me reword the phrasing of my original comment so it's less perplexing?

WalterSear|about 1 month ago
If you are getting garbage out, you are asking it for too much at once. Don't ask for solutions - ask for implementations.
hatefulmoron|about 1 month ago
Distinction without a difference. I'm talking about its output being insufficient, whatever word you want to use for output.
WalterSear|about 1 month ago
And I'm arguing that if the output wasn't sufficient, neither was your input.

You could also be asking for too much in one go, though that's becoming less and less of a problem as LLMs improve.

hatefulmoron|about 1 month ago
You're proposing a truism: if you don't get a good result, it's either because your query is bad or because the LLM isn't good enough to provide a good result.

Yes, that is how this works. I'm talking about the case where you're providing a good query and getting poor results. Claiming that this can be solved by more LLM conversations and ultrathink is cope.

WalterSear|about 1 month ago
I've claimed neither. I actually prefer restarting or rolling back quickly rather than trying to re-work suboptimal outputs - less chance of being rabbit holed. Just add what I've learned to the original ticket/prompt.

'Git gud' isn't much of a truism.

WalterSear|about 1 month ago
I have pretty much the same amount of confidence when I receive AI-generated or non-AI-generated code to review: my confidence is based on the person guiding the LLM, and their ability to do that.

Much more so than before, I'll comfortably reject a PR that is hard to follow, for any reason, including size. IMHO, the biggest change that LLMs have brought to the table is that clean code and refactoring are no longer expensive, and should no longer be bargained for, neglected or given the lip service that they have received throughout most of my career. Test suites and documentation, too.

(Given the nature of working with LLMs, I also suspect that clean, idiomatic code is more important than ever, since LLMs have presumably been trained on that, but this is just a personal superstition, that is probably increasingly false, but also feels harmless)

The only time I think it is appropriate to land a large amount of code at once is if it is a single act of entirely brain dead refactoring, doing nothing new, such as renaming a single variable across an entire codebase, or moving/breaking/consolidating a single module or file. And there better be tests. Otherwise, get an LLM to break things up and make things easier for me to understand, for crying out loud: there are precious few reasons left not to make reviewing PRs as easy as possible.

So, I posit that the emotional reaction from certain audiences is still the largest, most exhausting difference.

grey-area|about 1 month ago
> clean code and refactoring are no longer expensive

Are you contending that LLMs produce clean code?

WalterSear|about 1 month ago
They do, for many people. Perhaps you need to change your approach.
grey-area|about 1 month ago
The code I've seen generated by others has been pretty terrible in aggregate, particularly over time as the lack of understanding and coherent thought starts to show. Quite happy without it thanks, haven't seen it adding value yet.
Jeremy1026|about 1 month ago
Or is the bad code you've seen generated by others pretty terrible, but the good code you've seen generated by others blends in as human-written?

My last major PR included a bunch of tests written completely by AI with some minor tweaking by hand, and my MR was praised with, "love this approach to testing."

dmurray|about 1 month ago
If you can produce a clean design, the LLM can write the code.
WalterSear|about 1 month ago
I think maybe there's another step too: breaking the design up into small enough pieces that the LLM can follow it, and you can understand the output.
TexanFeller|about 1 month ago
So do all the hard work yourself and let the AI do some of the typing, that you’ll have to spend extra time reviewing closely in case its RNG factor made it change an important detail. And with all the extra up front design, planning, instructions, and context you need to provide to the LLM I’m not sure I’m saving on typing. A lot of people recommend going meta and having LLMs generate a good prompt and sequence of steps to follow, but I’ve only seen that kinda sorta work for the most trivial tasks.
fragmede|about 1 month ago
Unless you're doing something fabulously unique (at which point I'm jealous you get to work on such a thing), they're pretty good at cribbing the design of things if it's something that's been well documented online (canonically, a CRUD SaaS app, with minor UI modification to support your chosen niche).
WalterSear|about 1 month ago
And if you are doing something fabulously unique, the LLM can still write all the code around it, likely help with many of the components, give you at least a first pass at tests, and enable rapid, meaningful refactors after each feature PR.
hatefulmoron|about 1 month ago
I don't really understand your point. It reads like you're saying "I like good code, it doesn't matter if it comes from a person or an LLM. If a person is good at using an LLM, it's fine." Sure, but the problem people have with LLMs is their _propensity_ to create slop in comparison to humans. Dismissing other people's observations as purely an emotional reaction just makes it seem like you haven't carefully thought about other people's experiences.
WalterSear|about 1 month ago
My point is that, if I can do it right, others can too. If someone's LLM is outputting slop, they are obviously doing something different: I'm using the same LLMs.

All the LLM hate here isn't observation, it's sour grapes. Complaining about slop and poor-quality outputs is confessing that you haven't taken the time to understand what is reasonable to ask for, and aren't educating your junior engineers on how to interact with LLMs.

lukan|about 1 month ago
"My point is that, if I can do it right, others can too."

Can it also be that different people work in different areas and LLMs are not equally good in all areas?

WalterSear|about 1 month ago
That was my first assumption, quite a while ago now.
rockskon|about 1 month ago
???

People complaining about receiving bad code is, by definition, observation.

HelloNurse|about 1 month ago
> Perhaps, in the process of producing it, the human notices what's happening and reconsiders (or even better, improves it such that it's no longer "slop".)

Given the same ridiculously large and complex change: if it is handwritten, only a seriously insensitive and arrogant crackpot could, knowing what's inside, submit it with any expectation that you accept it without a long and painful process, instead of improving it to the best of their ability. With LLM assistance, on the other hand, even a mildly incompetent but valuable colleague or contributor, someone you care about, might underestimate the complexity and cost of what they didn't actually write and believe that there is nothing to improve.

exe34|about 1 month ago
Are you quite sure that's the only difference you can think of? Let me give you a hint: is there any difference in the volume for the same cost at all?
f311a|about 1 month ago
It's the problem. I often have to guide LLMs 2-4 times to properly write 150-300 LOC changes because I see how the code can be simplified or improved.

There is no way that 9000 lines of code are decent. It's also very hard to review them and find the bad spots. Why spend your time in the first place? It probably took one hour for a person to generate it, but it will take ten to review and point out the (probably hundreds of) problems.

Without AI, no one would submit 9000 lines, because that's tens of hours of work which you usually split into logical parts.

cespare|about 1 month ago
It is 1995. You get an unsolicited email with a dubious business offer. Upon reflection, you decide it's not worth consideration and delete it. No need to wonder how it was sent to you; that doesn't need to influence the way you handle it.

No. We need spam filters for this stuff. If it isn't obvious to you yet, it will be soon. (Or else you're one of the spammers.)

resonious|about 1 month ago
The original ask was about one PR.
baq|about 1 month ago
Didn’t even hit the barn, sorry. Codegen tools were obvious, review assistance tools are very lagging, but will come.
eru|about 1 month ago
We already have some of them. And if you have a wide enough definition, we had them for a while.
lm28469|about 1 month ago
It 100% is.

Why would I bother reviewing code you didn't write and most likely didn't read?

ericmcer|about 1 month ago
It is a huge problem. PR reviews are a big deal, not just for code reasons, but they are one of the best teaching tools for new hires. Good ones take time and mental energy.

Asking me to review a shitty PR that you don't understand is just disrespectful. Not only is it a huge waste of everyone's time, you're forcing me to do your work for you (understanding and validating the AI solution), and you aren't learning anything because it isn't your work.

gpm|about 1 month ago
Eh, ask the author to split it in reviewable blocks if you think there's a chance you actually want a version of the code. More likely if it's introducing tons of complexity to a conceptually simple service you just outright reject it on that basis.

Possibly you reject it with "this seems more suitable for a fork than a contribution to the existing project". After all there's probably at least some reason they want all that complexity and you don't.

userbinator|about 1 month ago
If you try to inspect and question such code, you will usually quickly run into that realisation that the "author" has basically no idea what the code even does.

"review it like it wasn't AI generated" only applies if you can't tell, which wouldn't be relevant to the original question that assumes it was instantly recognisable as AI slop.

If you use AI and I can't tell you did, then you're using it effectively.

ahtihn|about 1 month ago
If it's objectively bad code, it should be easy enough to point out specifics.

After pointing out 2-3 things, you can just say that the quality seems too low and to come back once it meets standards. Which can include PR size for good measure.

If the author can't explain what the code does, make an explicit standard that PR authors must be able to explain their code.

charlieyu1|about 1 month ago
You are optimistic, as if the author even cared about the code. Most of the time you get another LLM response on why the code “works”.
danenania|about 1 month ago
I’m curious how people would suggest dealing with large self-contained features that can’t be merged to main until they are production-ready, and therefore might become quite large prior to a PR.

While it would be nice to ship this kind of thing in smaller iterative units, that doesn’t always make sense from a product perspective. Sometimes version 0 has bunch of requirements that are non-negotiable and simply need a lot of code to implement. Do you just ask for periodic reviews of the branch along the way?

arachnid92|about 1 month ago
The way we do it where I work (large company in the cloud/cybersecurity/cdn space):

- Chains of manageable, self-contained PRs each implementing a limited scope of functionality. “Manageable” in this context means at most a handful of commits, and probably no more than a few hundred lines of code (probably less than a hundred tbh).

- The main branch holds the latest version of the code, but that doesn’t mean it’s deployed to production as-is. Releases are regularly cut from stable points of this branch.

- The full “product” or feature is disabled by a false-by-default flag until it’s ready for production.

- Enablement in production is performed in small batches, rolling back to disabled if anything breaks.
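
As a minimal sketch of the false-by-default flag in the third bullet (everything here is hypothetical: the flag names, the env-var mechanism; real deployments usually read flags from a flag service rather than the environment):

```python
import os

def feature_enabled(name: str, default: bool = False) -> bool:
    """Flag lookup; false by default, so unfinished code is dead in production."""
    value = os.environ.get(f"FLAG_{name.upper()}")
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "on")

def handle_request(payload: dict) -> str:
    # The new feature ships in main but stays behind the flag until the
    # staged production enablement described above flips it on.
    if feature_enabled("new_parser"):
        return "parsed-with-new-path"
    return "parsed-with-old-path"

print(handle_request({}))  # old path until FLAG_NEW_PARSER=1 is set
```

The point being: code can merge to main in small PRs long before it is "done", because the flag, not the branch, decides what runs in production.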

Yizahi|about 1 month ago
In our case, if such a thing happens (a few times per year across hundreds of people), a separate branch is created and the team working on that feature is completely autonomous for a while, while there is constant normal work in trunk by everyone else. The team tests their feature and adjacent code to an acceptable beta state but doesn't do any extensive or full coverage, because that is impossible. Their code may be reviewed at that point if they request it, but it's done as an extra activity, with meetings and stuff. Then they optionally give this build to the general QA team to run the full suite on it. This may be done in several cycles if fatal issues are found. Then they announce that they will do the merge into trunk on days A to B and ask everyone to please hold off on committing into trunk in that time. Around that time they send a mail outlining changes, new functionality, and potential or actual unfixed issues. The QA team runs as full a coverage of tests as possible. The merge may be reverted at this point if it is truly bad. Or, if it is good, the team announces success and proceeds with normal work mode.
wiseowise|about 1 month ago
> I’m curious how people would suggest dealing with large self-contained features that can’t be merged to main until they are production-ready

Are you hiding them from CIA or Al-Qaeda?

Feature toggles, or just a plain Boolean flag, are not rocket science.

danenania|about 1 month ago
Not rocket science, but I think there are also some tradeoffs with feature flags?

People could build on top of half-baked stuff because it’s in main. Or you might interact with main in ways that aren’t ready for production and aren’t trivial to toggle… or you just forget a flag check somewhere important.

I could also see schema/type decisions getting locked in too early while the feature is still in flux, and then people don’t want to change after it’s already reviewed since it seems like thrashing.

But yeah, definitely it’s one option. How do you consider those tradeoffs?

JonChesterfield|about 1 month ago
They come from people who have established that their work is worth the time to review and that they'll have put it together competently.

If it's a newcomer to the project, a large self-contained PR is more likely to contain malware than benefits. View with suspicion.

foxglacier|about 1 month ago
The partial implementation could be turned off with a feature flag until it's complete.
exe34|about 1 month ago
you line up 10-20 PRs and merge them in a temporary integration branch that gets tested/demoed. The PRs still have to be reviewed/accepted and merged into main separately. You can say 'the purpose of this pr is to do x for blah, see top level ticket'. often there will be more than one ticket based on how self-contained the PRs are.
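
Concretely, that integration-branch flow can be sketched in plain git. Everything below is hypothetical (branch names, the throwaway repo built in a temp directory so the sketch is self-contained; `git init -b` needs git >= 2.28):

```shell
# Toy repo so the sketch is runnable end to end.
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email demo@example.com && git config user.name demo
echo core > core.txt && git add . && git commit -qm "base"

# Two candidate PR branches, each a small self-contained change off main.
git checkout -qb feature/parser main
echo parser > parser.txt && git add . && git commit -qm "feat: parser"
git checkout -qb feature/cache main
echo cache > cache.txt && git add . && git commit -qm "feat: cache"

# Temporary integration branch: merge every candidate PR into it and run
# tests/demos against the combined result. The integration branch itself
# is never merged; each PR is still reviewed and merged into main separately.
git checkout -qb integration/big-feature main
git merge -q --no-ff feature/parser -m "integrate feature/parser"
git merge -q --no-ff feature/cache -m "integrate feature/cache"
ls   # both parser.txt and cache.txt coexist here for testing
```

If the combined demo fails, you throw the integration branch away and fix the individual PRs; main never sees the half-baked state.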
ericmcer|about 1 month ago
I will schedule review time with coworkers I trust to go over it with them.

It is about ownership to me. I own my PRs. If I throw garbage out and expect you to fix it I am making you own my PRs. No one wants to be forced to own other peoples work.

ashdksnndck|about 1 month ago
If you ask them to break it into blocks, are they not going to submit 10 more AI-generated PRs (each with its own paragraphs of description and comment spam), which you then have to wade through? Why sink even more time into it?
Buttons840|about 1 month ago
Being AI-generated is not the problem. Being AI-generated and not understandable is the problem. If they find a way to make the AI-generated code understandable, mission accomplished.
ashdksnndck|about 1 month ago
How much of their time should open source maintainers sink into this didactic exercise? Maybe someone should vibe-code a bot to manage the process automatically.
data-ottawa|about 1 month ago
I think breaking a big PR up like this is usually fair

Sometimes I get really into a problem and just build. It results in very large PRs.

Marking the PR as a draft epic and then breaking it down into a sequence of smaller PRs makes it much easier to review. And you can still solicit big-picture critique on the draft epic.

I’m also a huge fan of documentation, so each PR needs to be clear, describe the bigger picture, and link back to your epic.

mrweasel|about 1 month ago
There's probably also a decent chance that the author can't actually do it.

Let's say it's the 9000 lines of code. I'm also not reviewing 900 lines, so it would need to be more than 10 PRs. The code needs to be broken down into useful components, that requires the author to think about design. In this case you'd probably have the DSL parser as a few PRs. If you do it like that it's easier for the reviewer to ask "Why are you doing a DSL?" I feel like in this case the author would struggle to justify the choice and be forced to reconsider their design.

It's not just chopping the existing 9000 lines into X number of bits. It's submitting PRs that makes sense as standalone patches. Submitting 9000 lines in one go tells me that you're a very junior developer and that you need guidance in terms of design and processes.

For open source I think it's fine to simply close the PR without any review and say: break this down if you want me to look at it. Then if a smaller PR comes in, it's easier to assess whether you even want the code. But if you're the kind of person who doesn't think twice about submitting 9000 lines of code, I don't think you're capable of breaking down your patch into sensible sub-components.

ashdksnndck|about 1 month ago
Some of the current AI coding tools can follow instruction like “break this PR up into smaller chunks”, so even a completely clueless user may be able to follow those instructions. But that doesn’t mean it’s worth a maintainer’s time to read the output of that.
latexr|about 1 month ago
> Or if you don't have an obligation to review it, you leave it there.

Don’t just leave it there, that reflects badly on you and your project and pushes away good contributors. If the PR is inadequate, close it.

ivanjermakov|about 1 month ago
My record is 45 comments on a single review. Merge conditions were configured so that every comment must be resolved.

If the PR author can satisfy it, I'm fine with it.

cryptonym|about 1 month ago
They will let AI somewhat satisfy it and then ask you for further review
ivanjermakov|about 1 month ago
Reminds me of curl problems with vulnerability report: https://news.ycombinator.com/item?id=43907376

At that point it is just malicious.

cryptonym|about 1 month ago
Some people genuinely believe agentic coding works great and that they have mastered it. Someone who PRs a simple feature with its own DSL is probably on that team and won't see the issue with their approach. They may think you are too old and resisting AI. They would probably tell you that if it's too much for your old-fashioned coding skills, you should just use an agent to review the PR.

If you think that way, who cares about the code and additional DSL? If there is an issue or evolution required, we'll let AI work on it. If it works, just let it merge. Much cheaper than human reviewing everything.

I hate it, maybe I'm too old.

EagnaIonat|about 1 month ago
Everyone is talking about having them break it down into smaller chunks. With vibe coding there is a near guarantee the person doesn't know what the code does either.

That alone should be reason to block it. On top of that, LLM-generated code is not protected by copyright, and by extension it can damage your code base's IP position.

My company does not allow LLM-generated code into anything that is their IP. Generic stuff outside of IP is fine, but every piece has to be flagged as created by an LLM.

In short, these are just the next evolution of low-quality PRs.

smsm42|about 1 month ago
> Vibe coding there is a near guarantee the person doesn't know what the code does either.

Accepting code into the project when only one person (the author) knows what it does is a very bad idea. That's why reviews exist. Accepting code that zero persons know what it does is sheer screaming insanity.

Cthulhu_|about 1 month ago
Unless it's not important. I think vibe coding is fine for self-hosted weekend projects / hackathons / POCs and only if there's no intersection with legal stuff (like PII or payment processing).

But for any open source or enterprise project? Hell no.

smsm42|about 1 month ago
If you don't ever need to know or maintain the code, sure, it's not your code, you don't own it, in fact the code is disposable. For something like POC, where I don't care how it's done, I just want to see if it can be done - I've done it myself. Then if real code is needed, you throw the disposable one out or rewrite it completely. That's fine. But if it's a long term project, somebody needs to own it.
exe34|about 1 month ago
> Everyone talking about having them break it down into smaller chunk. Vibe coding there is a near guarantee the person doesn't know what the code does either.

that's the point though, if they can't do it, then you close the ticket and tell them to fork off.

EagnaIonat|about 1 month ago
I agree, but you are potentially opening yourself up to 20+ PRs which are all vibe coded.
scott_w|about 1 month ago
Copy and paste is your friend here. If there's 20+ huge PRs, just paste "This PR is far too large to review, please break it down and submit smaller PRs and engage with us ahead of time to understand how to solve this problem."

Comment & Close PR, only engage in discussions on tickets or smaller, understandable PRs.

As others have said: if someone drive-by opens a huge PR, it's as likely to be malware as a beneficial implementation.

exe34|about 1 month ago
You can read one or two and decide if it's worth going through. Otherwise you can just reject them.
jeroenhd|about 1 month ago
> Vibe coding there is a near guarantee the person doesn't know what the code does either.

Having spent some time vibe coding over the weekend to try it out, I disagree. I understand what every line of code in the super-specific Android app I generated does, even if I don't have the Android dev experience to come up with the code off the top of my head. Laziness is as good a reason to vibe code as inexperience or incompetence.

I wouldn't throw LLM code at a project like this, though, especially not in a PR of this size.

MikeNotThePope|about 1 month ago
How about this?

“This PR is really long and I’m having a hard time finding the energy to review it all. My brain gets full before I get to the end. Does it need to be this long?”

Force them to make a case for it. Then see how they respond. I’d say good answers could include:

- “I really tried to make it smaller, but I couldn’t think of a way, here’s why…”

- “Now that I think about it, 95% of this code could be pushed into a separate library.”

- “To be honest, I vibe coded this and I don’t understand all of it. When I try to make it smaller, I can’t find a way. Can we go through it together?”

grodriguez100|about 1 month ago
Don’t. I would refuse to review a PR with 9000 LOC and 63 new files even if written by a human. Something that large needs to be discussed first to agree on an architecture and general approach, then split in manageable pieces and merged piece-wise in a feature branch, with each individual PR having reasonable test coverage, and finally the feature branch merged into master.
TriangleEdge|about 1 month ago
Amazon eng did some research and found the number of comments in a code review is proportional to the number of lines changed. Huge CRs get little comments. Small CRs get a lot of comments. At Amazon, it's common to have a 150 to 300 line limit to changes. It depends on the team.

In your case, I'd just reject it and ensure repo merges require your approval.
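A per-PR line limit like the one described above can be enforced automatically in CI. A minimal sketch in Python; the 300-line threshold and the `origin/main` base branch are assumptions, not anything from the thread:

```python
import subprocess

MAX_CHANGED_LINES = 300  # assumed team limit, in the spirit of the 150-300 range above


def count_changed_lines(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        if not line.strip():
            continue
        added, deleted, _path = line.split("\t", 2)
        # binary files show "-" instead of line counts; skip them
        if added != "-":
            total += int(added) + int(deleted)
    return total


def check_pr_size(base: str = "origin/main") -> None:
    """Fail the CI job if the current branch's diff exceeds the limit."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    n = count_changed_lines(out)
    if n > MAX_CHANGED_LINES:
        raise SystemExit(
            f"PR changes {n} lines; limit is {MAX_CHANGED_LINES}. "
            "Please split it into smaller PRs."
        )
```

Wired into a CI job, `check_pr_size()` gives contributors the "too large" feedback mechanically, before any human reviewer spends time on it.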

kwk1|about 1 month ago
"Inversely proportional" for what it's worth
senderista|about 1 month ago
Also, some teams have CR metrics that can be referenced for performance evaluations.
ekjhgkejhgk|about 1 month ago
Could you please provide a reference? I couldn't find it.
zukzuk|about 1 month ago
That’s a great way to discourage anyone ever doing any large scale refactoring, or any other heavy lifting.
febusravenga|about 1 month ago
That's good. Because large refactorings are usually harmful. They are also usually unplanned, not scoped, and based on very unquantifiable observations like "I don't like how the code is structured - let's do it my way".
Cthulhu_|about 1 month ago
That's a good thing, large scale refactorings should be very, very rare. Even automated code style changes can be controversial because of the churn they create. For large and/or important software, churn should be left to a minimum, even at the cost of readability or code cleanliness. I've seen enough open source projects that simply state they won't accept refactoring / reformatting PRs.
gempir|about 1 month ago
That means your code will stay old.

A new language feature is released; you cannot apply it to old code, since that would make a big PR. You need to do it super slowly over time, and most old code will never see it.

A better static type checker, that finds some bugs for you, you cannot fix them as your PR would be too big, you instead would need to make a baseline and split it up endlessly.

In theory yes, maybe a bit safer to do it this way, but discouraging developers from making changes is bad IMO. Obviously it depends on your use case; if you develop software that is critical to people's literal lives, then you'll move more carefully.

But I wager 99% of the software the world produces is some commerce software, where the only thing lost is money.

wizzwizz4|about 1 month ago
> A new language feature is released, you cannot apply it to old code, since that would make a big PR.

Good. Don't change code for the sake of shiny new things syndrome.

> A better static type checker, that finds some bugs for you, you cannot fix them as your PR would be too big,

Good. Report each bug separately, with a suggested fix, categorised by region of the code. Just because you ran the program, that doesn't mean you understand the code well enough to actually fix stuff: those bugs may be symptomatic of a deeper issue with the module they're part of. The last thing you need is to turn accidentally-correct code into subtly-wrong code.

If you do understand the code well enough, what's the harm in submitting each bugfix as a separate (independent) commit? It makes it easier for the reviewers to go "yup, yup, yup", rather than having to think "does this part affect that part?".

ThiefMaster-|about 1 month ago
Large-scale refactoring is not something you want from an external contributor, especially not if unsolicited.

Typically such refactoring is done by the core development team / maintainers, who are very familiar with the codebase. Also because DOING such a change is much easier than REVIEWING it if done by someone else.

TriangleEdge|about 1 month ago
The review bots can be bypassed.
charlieyu1|about 1 month ago
You want to do large scale refactoring without the main team agreeing? Seems like a disaster.
arachnid92|about 1 month ago
Just split up your work across multiple PRs.
onion2k|about 1 month ago
How would you go about reviewing a PR like this?

AI is a red herring in discussions like this. How the change was authored makes no difference here.

I wouldn't. I'd reject it. I'd reject it even if the author had lovingly crafted each line by hand. A change request is not "someone must check my work". It's a collaboration between an author and a reviewer. If the author is failing to bother respecting the reviewer's time then they don't deserve to get a review.

alexdowad|about 1 month ago
Be tactful and kind, but straightforward about what you can't/don't want to spend time reviewing.

"Thanks for the effort, but my time and energy is limited and I can't practically review this much code, so I'm closing this PR. We are interested in performance improvements, so you are welcome to pick out your #1 best idea for performance improvement, discuss it with the maintainers via ..., and then (possibly) open a focused PR which implements that improvement only."

ivanjermakov|about 1 month ago
Depends on context of course, but in my book "my time and energy is limited" is not a valid reason for a reject. Get back once you have time, review in chunks.
alexdowad|about 1 month ago
ivanjermakov, I don't know if you are an open source maintainer or not (I am, for several projects). If you are, and you follow the policy that "I will never reject PRs because of having no time, I will always get to it eventually", then I salute you. That is a self-sacrificing, altruistic position to take. It's also a very difficult position to maintain for the long term. If you can do it: congratulations!

As for me, my position is: "My project is my house. You want to be a guest in my house, you follow my rules. I really like people and am usually happy to answer questions from people who are reasonably polite, to review and provide feedback on their PRs, and so on. But I won't be pressured to prioritize your GitHub issue or PR over my work, my family, my friends, my health, or my personal goals in life. If you try to force me, I'll block you and there will be no further interaction."

If you don't like that position, well, I understand your feelings.

ivanjermakov|about 1 month ago
I'm absolutely with you on that. I'm not saying that every contribution deserves equal attention and that rejecting contributions is a bad/impolite thing.

There has to be a better reason than "your PR is too big", as it's likely just a symptom, and also very much context-sensitive. If it is a 5kLOC PR that adds a compiler backend for a new architecture, then it probably deserves attention because of its significance.

But if it's obviously low quality code, then my response would be that it is low quality code. Long story short, it's your (the submitter's) problem, not my (the reviewer's, BDFL's) problem.

wiseowise|about 1 month ago
> is not a valid reason for a reject

As a reviewer or as a submitter?

JohnFen|about 1 month ago
I'd just reject it for being ridiculous. It didn't pass the first step of the review process: the sniff test.
brudgers|about 1 month ago
Charitably, even though it is not what you or I would do, the pull request could be a best good faith effort of a real human being.

So to me, it's less about being ridiculous (and "ridiculous" is a fighting word) and more a simple "that's not how this team does things because we don't have the resources to work that way."

Mildly hurt feelings in the most likely worst case (no fodder for a viral over-the-top tweet). At best, recruitment of someone with cultural fit.

JohnFen|about 1 month ago
My objection to a PR like this has nothing to do with whether or not a human wrote it. It's that the PR is too large and complex. The reason I'd give for rejecting it would be that. I wouldn't say "it's ridiculous" as the reason. I would 100% be thinking that, though.
brudgers|about 1 month ago
That’s good.

My experience is that “too large/complex” provides an opening for argumentativeness and/or drama.

“We don’t do it like this” does not so much. It is social, sufficient and not a matter of opinion (“too” is a matter of opinion).

BrenBarn|about 1 month ago
What about "this is large and complex enough to be not the way we do things"?
f311a|about 1 month ago
It's very rare to see 9K-line PRs by humans. They usually take weeks of work if it's a serious change. No one wants to spend such an amount of time just to get rejected. You split it up and send the pieces one by one while discussing changes.

But if it takes 1 hour with AI, you just send it right away...

CharlieDigital|about 1 month ago
Ask the submitter to review and leave their comments first or do a peer code review with them and force them to read the code. It's probably the first time they'll have read the code as well...
groguzt|about 1 month ago
I really like this. The reason vibe-coded PRs are often bad is that people don't review them themselves first; they just look at the form, and if it looks vaguely similar to what they had in mind, they'll just hit save and not ask the LLM for corrections.
rhubarbtree|about 1 month ago
In our company, you would immediately reject the PR based on size. There are a bunch of other quick bounce items it could also fail on, eg documentation.

The PR would then be split into small ones up to 400 lines long.

In truth, such a big PR is an indicator that either (a) the original code is a complete mess and needs reengineering or more likely (b) the PR is vibe coded and is making lots of very poor engineering decisions and goes in the bin.

We don’t use AI agents for coding. They’re not ready. Autocomplete is fine. Agents don’t reason like engineers, they make crap PRs.

ethin|about 1 month ago
This! And vibe-coded solutions usually will implement stuff your project already has in it, instead of reusing what already exists. Or Claude will massively over-engineer something that could be collapsed to 30 lines or something.
Cthulhu_|about 1 month ago
Can confirm, I asked it to write a unit test for a small component; it wrote tons of code that parsed and validated some JSON configuration (a theme file) instead of just the unit under test. I just wrote it myself instead of trying to argue with it.

Maybe the AI was too clever for its own good? Have AI coding assistants evolved from junior (naive but alright) to medior (overly complicated and complete)?

andreygrehov|about 1 month ago
That 10+ years old joke never gets old:

10 lines of code = 10 issues.

500 lines of code = "looks fine."

Code reviews.

whynotmaybe|about 1 month ago
I recently reached another milestone.

+153675, -87954: I don't care. Just taking the time to read it would take longer than fixing the related bugs.

dosinga|about 1 month ago
Ideally you have a document in place saying this is how we handle vibe coding, something like: if you have the AI write the first version, it is your responsibility to make it reviewable.

Then you can say (and this is hard): this looks like vibe-coded output and misses that first human pass we want to see in these situations (link), please review and afterwards feel free to (re)submit.

In my experience they'll go away. Or they come back with something that isn't cleaned up and you point out just one thing. Or sometimes! they actually come back with the right thing.

rvrs|about 1 month ago
Enforce stacked PRs, reject PRs over 500-1k LoC (I'd argue even lower, but it's a hard sell)
arianjm|about 1 month ago
It always depends on your position and their position, but by the sounds of it... I'd say it's too big for a simple "review this".

I'd ask for them to write their thought process, why they made the decisions they made, what the need for so many files and so many changes. I may ask for a videoconference to understand better, if it's a collegue from work.

By now hopefully you should know if their approach is valid or not really. If not sure yet, then I'd take a look at the code, specially at the parts they refer to most importantly in their answer to my previous questions. So not a detailed review, a more general approach, to decide if this is valid or not.

If it's a valid approach, then I guess I'd review it. If not, then give feedback as to how to make it valid, and why it isn't.

Not valid is very subjective. From "this is just garbage", to "this is a good approach, but we can implement this iteratively in separate PRs that will make my life easier", again, it depends on your and their position.

raincole|about 1 month ago
You ask questions. Literally anything, like asking them why they believe this feature is needed, what their code does, why they made a DSL parser, etc.

The question itself doesn't matter. Just ask something. If their answer is genuine and making sense you deal with it like a normal PR. If their answer is LLM-generated too then block.

collingreen|about 1 month ago
My gut says close it.

My more professional side says invite the person to review it together - I do this for big or confusing PRs regardless of AI and it is both helpful and a natural backpressure to big PRs.

My tactical side says invite the person to show you their ai process because wow that's a lot of code that's super cool if it's good enough and then see if the AI can turn the PR into small, coherent, atomic chunks (rewritten with some arch learned from the existing project) and leave the person with those prompts and workflows.

My manager side is already very explicit with the team that code review is the bottleneck AND that the code both working and being easy to understand is the author's responsibility, which makes these conversations much, much easier.

Cthulhu_|about 1 month ago
I wouldn't, they can start by writing requirements and a design first, then break it up into manageable components.

Or just refuse to review and let the author take full responsibility in running and maintaining the thing, if that's possible. A PR is asking someone else to share responsibility in the thing.

bluerooibos|about 1 month ago
I wouldn't review it - bad engineering practice to submit this much work in one go - it puts too much expectation on the reviewer and makes it more likely that something gets broken.

Even 1000 lines is pushing it, IMO. Tell them to split the PR up into more granular work if they want it merged.

LaFolle|about 1 month ago
There are good suggestions in the thread.

One suggestion that possibly isn't covered: you can document clearly how AI-generated PRs will be handled, make that policy easy for contributors to discover, and, if/when such a PR shows up, refer to the documented section to save yourself time.

siwatanejo|about 1 month ago
Forget about code for a second. This all depends a lot on what goal the PR achieves. Does it align with the goals of the project?
appreciatorBus|about 1 month ago
How can you tell if it aligns with the goals of the project without reviewing 9000 lines of code first?
ivanjermakov|about 1 month ago
PRs rarely exist in a vacuum. Usually there is a ticket/issue/context which required a code change.
Cthulhu_|about 1 month ago
Read the title and description of the PR first. If that fails the sniff test, the code would as well.
siwatanejo|about 1 month ago
Are you kidding me? You should be able to explain, from the user's PoV, what the PR achieves: a new feature? A bugfix?

That data point is waaaaaay more important than any other when considering if you should think about reviewing it or not.

wiseowise|about 1 month ago
Okay, it does align. What next?
siwatanejo|about 1 month ago
Ok great. Now that it aligns, how valuable is it? Is it a killer, terrific feature? Then maybe you should review it. Is it just fixing a nitpick? Then ask the contributor to find a less verbose change.
zigcBenx|about 1 month ago
In my opinion no PR should have so many changes. It's impossible to review such things.

The only exception is some large migration or version upgrade that required lots of files to change.

As far as it goes for vibe-coded gigantic PRs, it's a straight reject from me.

devrundown|about 1 month ago
9000 LOC is way too long for a pull request unless there is some very special circumstance.

I would ask them to break it up into smaller chunks.

smsm42|about 1 month ago
The only way such a PR can be reviewed is if it's accompanied with a detailed PRD and tech design documents, and at least half of that LOC count is tests. Even then it requires a lot of interactive work from both sides. I have seen PRs third or quarter of this size that took weeks to properly review and bring to production quality. Unless there's something artificially inflating the side of it (like auto-generated files or massive test fixtures, etc.) I wouldn't ever commit to reviewing such a behemoth without a very very good reason to.
le-mark|about 1 month ago
How long was this person working on it? Six months? Anything this big should’ve had some sort of design review. The worst is some junior going off and coding some garbage no one sees for a month.
jonchurch_|about 1 month ago
You can churn this stuff out in about an hour these days though, seriously. Thats part of the problem, the asymmetry of time to create vs time to review.

If I can write eight 9k-line PRs every day and open them against open source projects, even closing them, let alone engaging with them in good faith, is an incredible time drain compared to the time investment to create them.

O-stevns|about 1 month ago
That's a lot of code for a PR, though I should admit I have made PRs half that size myself.

Personally I think it's difficult to address these kinds of PRs, but I also think that git is terrible at providing solutions to this problem.

The concept of stacked PRs is fine up to the point where you need to make changes throughout all your branches; then it becomes a mess. If you (like me) have a tendency to rewrite your solution several times before ending up with the final result, then having to split this into several PRs does not help anyone. The first PR will likely be outdated the moment I begin working on the next.

Open source is also more difficult in this case because, contrary to working for a company with schedules, deadlines, etc., you can't (well, you shouldn't) rush a review when it's on your own time. As such, PRs can sit for weeks or months without being addressed. When you eventually need to reply to comments about how, why, etc., you have forgotten most of it and need to read the code yourself to reclaim the reasoning. At that point it might be easier to re-read a 9000-line PR over time than to read 5-10 PRs with maybe-meaningful descriptions and outcomes, where the implementation changes every time.

Also, if it's from a new contributor, I wouldn't accept such a PR, vibe coded or not.

fhd2|about 1 month ago
I'd say you have three options:

1. Reject it on the grounds of being too large to meaningfully review. Whether they used AI or not, this is effectively asking them to start over in an iterative process where you review every version of the thing and get to keep complexity in check. You'll need the right power and/or standing for this to be a reasonable option. At many organisations, you'd get into trouble for it as "blocking progress". If the people that pay you don't value reliability or maintainability, and you couldn't convince them that they should, that's a tough one, but it is how it is.

2. Actually review it in good faith: Takes a ton of time for large, over engineered changes, but as the reviewer, it is usually your job to understand the code and take on responsibility for it. You could propose to help out by addressing any issues you find yourself rather than making them do it, they might like that. This feels like a compromise, but you could still be seen as the person "blocking progress", despite, from my perspective, biting the bullet here.

3. Accept it without understanding it. For this you could _test_ it and give feedback on the behaviour, but you'd ignore the architecture, maintainability etc. You could still collaboratively improve it after it goes live. I've seen this happen to big (non-AI generated) PRs a lot. It's not always a bad thing. It might not be good code, but it could well be good business regardless.

Now, however you resolve it, it seems like this won't be the last time you'll struggle to work with that person. Can, and do they want to, change? Do you want to change? If you can't answer either of these questions with a yes, you'll probably want to look for ways of not working with them going forward.

rurban|30 days ago
I just came back from installing a vibe coded new service and frontend. It had most of the new required features the old python single file service didn't have. A new big shiny react typescript monster. Good UI.

The client would have loved to use it, as it was much easier to use. But in the end it was premature, not tested, and not adjustable to the client's needs on-site. Too many states, even a global PostgreSQL store. Super fragile. So I had to sidestep the new shiny Claude-generated React code, after 2 people tried to fix it for 3 weeks, implemented the basic new features on the old system in a day, and this works stably now. No global state, just a single job file you copy there manually.

Enforce good software practices. Test that it works. Have a backup solution ready if it doesn't work. Have simulators to test real behaviors. My simulators saved my day.

johnnyanmac|about 1 month ago
excuse me, 9000? If that isn't mostly codegen, including some new plugin/API, or a fresh repository I'd reject it outright. LLM's or not.

In my eyes, there really shouldn't be more than 2-3 "full" files worth of LOC for any given PR (which should aim to address 1 task/bug each; if not, maybe 2-3 at most), and general wisdom is to aim to keep "full" files around 600 LOC each (for legacy code, this is obviously very flexible, if not infeasible, but it's a nice ideal to keep in mind).

An 1800-2000 LOC PR is already pushing what I'd want to review, but I've reviewed a few like that when laying scaffolding for a new feature. Most PR's are usually a few dozen lines in 4-5 files each, so it's far below that.

9000 just raises so many red flags. Do they know what problem they are solving? Can they explain their solution approach? Give general architectural structure to their implementation? And all that is before asking the actual PR concerns of performance, halo effects, stakeholders, etc.

throwaway290|about 1 month ago
Don't accept this PR. If it's bot generated you are not here to review it. They can find a bot to review bot generated requests.
renewiltord|about 1 month ago
It's a basic engineering principle: you do not do work amplification. E.g. debouncing, request coalescing, and back-pressure are all techniques to prevent a user from making the server do lots of work in response to small user effort.

As an example: you have made a summarization app. A user tries to upload a 1 TB file. What do you do? Reject the request.

You have made a summarization app. A user tries to upload a 1-byte file 1000 times. What do you do? Reject the requests.

However, this handles the accidental or misconfigured user. What if you have a malicious user? There are many techniques for this as well: hell-ban, tarpit, limp.

For a hell-ban, simply do not handle the request. It appears to be handled but is not.

For a tarpit, raise the difficulty for the request maker. E.g. put Claude Code with the GitHub MCP on the case, give it broad instructions to be very specific, request concise code and splits, etc., then put subsequent PRs also into CC with the GitHub MCP.

For a limp, provide comments slowly, using a machine.

This assumes you're not working with such a person. If you are working with such a person, email your boss and request they be fired. For the good of the org, you must kill the demon.
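The debouncing technique mentioned above collapses a burst of requests into a single unit of work. A minimal sketch in Python; the decorator name and wait time are illustrative, not from any particular library:

```python
import threading


def debounce(wait_seconds):
    """Delay calls to the wrapped function; only the last call in a burst runs.

    Analogous to handling a burst of near-identical PRs: the server does one
    unit of work no matter how many times the user mashes the button.
    """
    def decorator(fn):
        timer = None
        lock = threading.Lock()

        def wrapper(*args, **kwargs):
            nonlocal timer
            with lock:
                if timer is not None:
                    timer.cancel()  # drop the earlier pending call
                timer = threading.Timer(wait_seconds, fn, args, kwargs)
                timer.start()
        return wrapper
    return decorator
```

For example, decorating a handler with `@debounce(0.5)` and calling it ten times in quick succession runs the handler once, with the arguments of the last call.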

dbgrman|about 1 month ago
TBH, depends on what is being reviewed. Is it a prototype that might not see light of day and is only for proof-of-concept? Did an RFC doc precede it and reviewers are already familiar with the project? Were the authors expecting this PR? Was there a conversation before the PR was sent out? Was there any effort to have a conversation after the PR was shared? Was this even meant to be merged into main?

I'll just assume good intent first of all. Second, 9000 LOC spanning 63 files is not necessarily AI-generated code. It could be a codemod. It could be a prolific coder. It could be a lot of codegen'd code.

Finally, the fact that someone is sending you 9000 LOC hints that they find this OK, and this is an opportunity to align on your values. If you find it hard to review, tell them that: you find it hard to review, you can't follow the narrative, it's too risky, etc.

Code review is almost ALWAYS an opportunity to have a conversation.

flambojones|29 days ago
Sounds like an opportunity for the person who sent it to learn to improve their prompts. I've set my default claude rules to emphasize doing small-contained commits and use a stacking tool like Graphite to help them get reviewed.
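Project-level rules like the ones described above can live in a file the agent reads on every run (Claude Code reads a `CLAUDE.md` at the repo root; the exact wording below is an illustrative sketch, not a canonical format):

```markdown
# CLAUDE.md (illustrative)

## Change size
- Keep each commit small and self-contained; prefer several focused
  commits over one large one.
- If a change would exceed roughly 300 changed lines, stop and propose
  a plan to split it into stacked PRs before writing more code.

## Before opening a PR
- Summarize what changed and why in the PR description.
- Do not introduce new abstractions (DSLs, frameworks, parsers) without
  prior discussion in an issue.
```

The point is to push the splitting work onto the tool that generated the code, rather than onto the reviewer.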
hsbauauvhabzb|about 1 month ago
“Hey chatgpt, reject this pr for me. Be extremely verbose about the following topics:

- Large prs - vibe coding - development quality”

wiseowise|about 1 month ago
Finally, an advice from 10x AI engineer.
ivankahl|about 1 month ago
What are your organization's expectations or policies regarding PR size and acceptable AI usage? Even if your organization hasn't set any expectations, what are yours—and have you communicated them to the author?

If expectations have been shared and these changes contradict them, you can quickly close the PR, explain why it's not acceptable, and ask them to redo it.

If you don't have clear guidelines on AI usage or haven't shared your expectations, you'll need to review the PR more carefully. First, verify whether your assumption that it’s a simple service is accurate (although from your description, it sounds like it is). If it is, talk to the author and point out that it's more complicated than necessary. You can also ask if they used AI and warn them about the complexities it can introduce.

fathermarz|about 1 month ago
Let me ask a different question. A large refactor ended up in a 60K-line Python PR because the new lead didn't feel like merging it in until it was basically done. He even asked other devs to merge into his branch, so that we would merge it all later.

How does one handle that with tact and not lose their minds?

JonChesterfield|about 1 month ago
Refuse to merge into their branch. If you have serious test coverage and the refactor doesn't change behaviour, it'll be fine.

If you don't have test coverage, or if the "refactor" is also changing behaviour, that project is probably dead. Make sure there's a copy of the codebase from before the new lead joined so there's a damage mitigation roll back option available.

wiseowise|about 1 month ago
You get Leetcode subscription and start going through paths for a company that can match or exceed your salary.
ilc|about 1 month ago
The same way I would with a human:

If I thought the service should only be 1000 lines tops:

- Reject due to excess complexity.

If it is a proper solution:

- Use AI to review it, asking it to be VERY critical of the code, and look for spots where human review may be needed, architecture wise, design wise and implementation wise.

- Ask the AI again to do a security review etc.

- Tell the author to break the PR down into human size chunks using git.

Why those things? It's likely some manager is gonna tell me to review it anyway. And if so, I want to have a head start, and if there are critical shoot-down-level issues I can find quickly with an AI, I'd just shut the PR down now.

As in any "security" situation, in this case the security of your codebase and sanity, defense in depth is the answer.

anarticle|about 1 month ago
No face, no case. They have to break it way down, just like at any org. In fact, I would ask for more tests than usual with a test plan/proof they passed. 9k is a little spicy, separate PRs, or an ad hoc huddle with them rubber ducking you through the code. Depends on if you care about this that much or not.

Unless you really trust them, it's up to the contributor to make their reasoning work for the target. Else, they are free to fork it if it's open source :).

I am a believer in using llm codegen as a ride along expert, but it definitely triggers my desire to over test software. I treat most codegen as the most junior coder had written it, and set up guardrails against as many things llm and I can come up with.

fancyfredbot|about 1 month ago
If it shouldn't be 9k LOC, and it doesn't need a DSL parser, then reject it as overcomplicated and unmaintainable. Make it clear how large and complex you expect it to be, and where existing code or frameworks should be reused, so they can go away and address your concerns, and so that there's a high chance you'll be able to approve if they do.

Above all, you aim to allow the contributor to be productive, you make it clear what constraints they need to operate under in order to use AI codegen effectively. You want to come across as trying to help them and need to take care not to appear obstructive or dismissive.

throwaway106382|about 1 month ago
You don't.

Was your project asking for all this? No? Reject.

EdwardDiego|about 1 month ago
I wrote a lot of comments - for humans, but then I also specifically addressed some to the bot - "Cursor, remove all emojis in log messages, and do not use print for debugging, use a logger, where you are using a logger you are repeatedly importing the logging library in various conditional clauses, you should always import at the top level of the file" etc. etc. etc. - because you know that they're going to feed my review back to the bot.

The fact that someone submitted this PR in that state though...

jasonjmcghee|about 1 month ago
That's unreasonably large. Depending on the content, PRs tend to get harder and harder to read with every line of code.

1k added lines is imo already pushing it.

9k and 63 files is astronomical and very difficult to review.

A proper review means being able to understand the system and what's being changed, how, and why in order to be able to judge if it was done properly and includes everything it should and nothing it shouldn't.

9k lines is just too much to be able to do this properly.

tayo42|about 1 month ago
You can't really review this. Rubber stamp it or reject it.
lionkor|about 1 month ago
Close them and report to your boss. If your boss doesn't care, look for a new job. Once you have a new job, quit the old and cite that specific case as the reason.
cyrusradfar|about 1 month ago
Trigger: Shameful Self Promotion

I created a tool in VSCode for this called Intraview. It allows you to create a dynamic code tour to provide feedback.

It works with your existing agent and creates a sharable tour that you can navigate and provide feedback step by step.

Rationally, this is much easier than reviewing the diff, because you can prompt to break up the PR logically so you can approve in functional pieces.

T_Potato|about 1 month ago
I have a tangent question: how do you deal with a team that spends days nitpicking implementation, double-speaking, and saying "I didn't actually expect you to implement this the way I said, I was just saying it would be nice if it was like this, can you undo it"? I spent 3 weeks on a code review because of the constant back and forth; and I wish, oh I wish, they would allow PRs to be small, but the rule is that the PR has to implement the full deliverable feature. And that can mean 20 files to constantly change and change and change and change. Oh, and then the "why did you use Lombok" question that comes up even though the project already uses Lombok, so you are stuck defending the use of a library that is already in the project, for no reason other than to flatter the egos of the gatekeepers who say "yes, this is good, but I want you to name this abc instead of ab before we merge", when in context it doesn't add or remove any value, not even clarity.
tjansen|about 1 month ago
Generally, my stance is that I add more value by doing whatever ridiculous thing people ask me to change than waste my time arguing about it. There are some obvious exceptions, like when the suggestions don't work or make the codebase significantly worse. But other than that, I do whatever people suggest, to save my time, their time, and deliver faster. And often, once you're done with their initial suggestions, people just approve.

This doesn't help all the time. There are those people who still keep finding things they want you to change a week after they first reviewed the code. I try to avoid including them in the code review. The alternative is to talk to your manager about making some rules, like giving reviewers only a day or two to review new code. It's easy to argue for that because those late comments really hinder productivity.

dbetteridge|about 1 month ago
Doesn't help you much I imagine, but the one time we had a dev like this he was fired after multiple complaints to the team lead.
reactordev|about 1 month ago
Easy, auto reject and close it. If asked why, state that each feature should be its own PR. Don’t waste any more brain cells on it.

If an engineer really cared, they would discuss these changes with you. Each new feature would be added incrementally and ensuring that it doesn’t break the rest of the system. This will allow you to understand their end goal while giving them an avenue to achieve it without disrupting your end goal.

javier_e06|about 1 month ago
I would request that the PR include references to unit tests with 100% coverage. Once I run them and they pass, I would do a spot check and look for glaring errors. Nothing deep. Perhaps I would run lint or some static analysis tool on the code. If the analysis tools come out squeaky clean and the unit tests pass? Well, what's not to like? One or more problems? Reject the whole thing.
thinkingtoilet|about 1 month ago
The problem is you can get a very large, messy, and inconsistent code base that eventually will slow things down. Even if tests pass, the code can be a detriment in the long run.
data-ottawa|about 1 month ago
I don’t like the 100% test coverage approach.

The PR will be huge, plus AI is great at adding tons of shallow tests.

I see tests as little pins that hold your codebase down. They can be great for overall stability, but too many and your project becomes inflexible and brittle.

In this case you’d be nailing a bunch of code that you don’t want to the code base.

phendrenad2|about 1 month ago
Are they truly vibe-coded? Or is the person simply accomplishing months of work in one day? Do you think the submitter reviewed it themselves? There's a difference you know. Like it or not, AI coding is not going away.

In your case, 9000 LOC and 63 files isn't that crazy for a DSL. Does the DSL serve a purpose? Or is it just someone's feature fever dream to put your project on their resume?

fifilura|about 1 month ago
Is it Java/Spring? Then probably go along and be happy that a human didn't have to write those 9000 lines for a trivial service.
Cthulhu_|about 1 month ago
Man I'd hate it even more then. If I ever get back to Java I hope this nonsense has been resolved. I've been spoiled by Go where they're a lot more pragmatic about things. I want to add a brouhaha about code volume but it's compared with Java so never mind.
NumberCruncher|about 1 month ago
Why not fight fire with fire and use AI to:

Version A: find 100 LOC which can be reduced to 50 LOC without changing the functionality. Then ask the author to go through the PR making sure it's not bloated. Repeat.

Version B: find hidden bugs. Ask the author to fix them. Repeat.

Keep them occupied while saving your own face. I would also fine-tune my own agent to automate this kind of work for me.

dzink|about 1 month ago
With AI, code complexity is a cost bigger than money, because it takes an unbounded amount of time from humans (maintainers, engineers) and requires an increasing amount of memory and hardware to handle (unnecessarily). You have to account for it and hold contributors accountable for it. Otherwise any codebase will become unmanageable, un-runnable and un-upgradable.
locknitpicker|about 1 month ago
> How would you go about reviewing a PR like this?

State the PR is too large to be reviewed, and ask the author to break it down into self-contained units.

Also, ask which functional requirements the PR is addressing.

Ask for a PR walkthrough meeting to have the PR author explain in detail to an audience what they did and what they hope to achieve.

Establish max diff size for PRs to avoid this mess.
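That last suggestion is easy to automate. Below is a minimal sketch of such a gate, with a hypothetical threshold and made-up file names; in a real pipeline the input text would come from `git diff --numstat origin/main...HEAD`:

```python
MAX_ADDED_LINES = 500  # hypothetical team threshold; tune to taste

def diff_too_large(numstat: str, limit: int = MAX_ADDED_LINES) -> bool:
    """Check `git diff --numstat` output against a line budget.

    Each numstat line looks like "<added>\t<deleted>\t<path>";
    binary files report "-" for the counts and are skipped here.
    """
    added = 0
    for line in numstat.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():
            added += int(parts[0])
    return added > limit

# A 9k-line PR trips the gate; a couple of small files do not.
print(diff_too_large("9000\t120\tsrc/dsl/parser.py"))          # True
print(diff_too_large("10\t2\ta.py\n5\t1\tb.py\n-\t-\ta.png"))  # False
```

Wired into CI, the script exits nonzero on oversized PRs, which turns "please split this up" from a judgment call into a stated, enforced rule.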

thw_9a83c|about 1 month ago
> How to deal with long vibe-coded PRs?

This is partly a joke, but it works: rewrite your project in an obscure, unpopular, uncool programming language that LLMs cannot meaningfully write code in. You will get zero vibe-coded PRs and you will remain in full control of your source code.

jeremyjh|about 1 month ago
I'd just close it without comment. Or maybe if I'm feeling really generous I'll make a FAQ.md that gives a list of reasons why we'll close PRs without review or comment and link that in the close comments. I don't owe anyone any time on my open source projects. That said, I haven't had this issue yet.
tracerbulletx|about 1 month ago
That's fine for an open source project, but many, many companies are mandating AI use, they're putting it in performance reviews, they're buying massive Cursor subscriptions. You'd be cast as an obstructionist to AI's god-like velocity ™.
jeremyjh|about 1 month ago
Well in my case I'd just fire them if they opened a 9,000 LOC PR that they didn't understand.
tacostakohashi|about 1 month ago
Use AI to generate the review, obviously.
jake-coworker|about 1 month ago
I usually share this resource when people start doing this https://google.github.io/eng-practices/review/developer/smal...
claar|about 1 month ago
Apparently "CL" is "Change List", Google speak for a Pull Request, if anyone else was confused like I was.
wheelerwj|about 1 month ago
The same way you review a non vibe coded pr. Whats that got to do with anything? A shit pr is a shit pr.
self_awareness|about 1 month ago
Reject, citing unnecessary technical debt as the reason. Most of the time, custom DSLs are not needed.

The question is what was the original task that needed to be fixed? I doubt it required a custom DSL.

Issue a research task first to design the scope of the fix, what needs to be changed and how.

giantg2|about 1 month ago
Start with the test files. There's no way the AI had meaningful and working test cases. Pop a comment on each test file about missing tests or expanding them. That will force the dev to review their own code and make substantial changes.
ares623|about 1 month ago
Ask them if they reviewed the AI’s output before opening the PR. If they didn’t then ask them to at least review it first rather than having you do all the work. If they did then is a 2nd review from you really necessary? ;)
brutal_chaos_|about 1 month ago
Having experienced AI at $job and having tried to make vibecoding a thing: run when you see it. Yes, that means good-enough AI gets through; what's the harm in that, if it works as you need it to?
aaronrobinson|about 1 month ago
Reject it
abhimanyue1998|about 1 month ago
Vibe review it with AI, then run it on vibe production support. Simple.
wengo314|about 1 month ago
Reject outright. Ask them to split it into a reasonable chain of changesets.
cat_plus_plus|about 1 month ago
Vibe review with all the reasons it should not be merged obviously.
pacifika|about 1 month ago
It roughly takes an hour to review 1,000 LOC. Tell your manager to book you in for a day and a half for the review. Usually having to schedule it is a deterrent to a quick approval.
ontouchstart|about 1 month ago
A more difficult question might be if it were merged now and 100 merges later you found a serious bug with a root cause in this PR, do you ask the same person to fix it?
aryehof|about 1 month ago
This is effectively a product, not a feature (or bug fix). Ask the submitter how you can determine whether this meets functional and non-functional requirements, to start with.
bmitc|about 1 month ago
Reject it and request the author makes it smaller.

PRs should be under 1000 lines.

The alternative is to sit down with them and ask what they're trying to accomplish and solve the problem from that angle.

zzzeek|about 1 month ago
It's garbage, reject it. Over-engineered. Update your PR guidelines to say that AI is fine for helping write code, but PRs ultimately have to be human-designed.
ojr|about 1 month ago
I would test whether the new features work and whether there are any regressions around critical business functions, and merge it if my manual tests pass.
fxtentacle|about 1 month ago
"I trust you that you have proof-read this", and then just merge. When production explodes, their name will be all over "git blame".
james_marks|about 1 month ago
“This is unnecessarily complex” and cite 1-2 egregious examples, with a LOC estimate that you think is more reasonable.

5 minutes, off the cuff.

999900000999|about 1 month ago
Reject it and tell them to actually code it.
dustingetz|about 1 month ago
zoom call

ask them to walk you through it

ask for design doc if appropriate

what is the test plan, who is responsible for prod delivery and support

(no difference from any other large pr)

ugh123|about 1 month ago
Are there tests written? You could start by demanding tests pass and demonstrate some kind of coverage metric.
dlisboa|about 1 month ago
Close them. It's not a PR in good faith. A pull-request is meant to be reviewable, 9k LOC is not.
ako|about 1 month ago
AI code generators are getting better fast; in the near future they will be able to produce good changes faster than you can review them. How will you deal with it then? Most vibe coding tools can also produce smaller PRs, but then you have to deal with 250+ PRs in a week. Is that more manageable? My guess is we need new tools that get the human out of the loop: more automated reviews, tests, etc.
ako|about 1 month ago
Instead of downvotes I would appreciate some insightful comments on this, as I'm currently struggling with this problem. In the last week I've vibe-coded (vibe-engineered?) a TypeScript project with 230+ commits, 64 TypeScript files, and 27k+ lines of code. Too much to actually read. Validation is mostly through testing: automated tests, architecture reviews (generated Mermaid diagrams). I'm mostly reviewing the code structure and architecture, the libraries it uses, etc. It has 600+ unit and integration tests, but even reviewing those is too much...
shinycode|about 1 month ago
Our problem is not coding. Our problem is knowledge. If no one reads it and no one knows how it works, and that's what the company wants because we need to ship fast, then the company doesn't understand what software is all about. Code is a language; we write stories that make a lot of sense and have consequences. If the company does not care that humans need to know and decide the details of the story and how it's written, then let it accept the consequences of a statistically generated story with no human supervision. Let it trust the statistics when there's a bug and no one knows how the code works, because no one read it and no one is there anymore to debug it. We'll see in the end whether it's cheaper to let the code be written and understood only by statistical algorithms. Otherwise, just work differently instead of generating thousands of LOC; it's your responsibility to review and understand, no matter how long it takes.
smsm42|about 1 month ago
> In the last week i've vibe-code (vibe-engineered?) a typescript project with 230+ commits, 64 typescripts files, with 27k+ lines of code. Too much to actually read.

Congratulations, you discovered that generating code is only part of the software development process. If you don't understand what the code is actually doing, good luck maintaining it. If it's never reviewed, how do you know these tests even test anything? Because they say "test passed"? I can write you a script that prints "test passed" a billion times; would you believe it is a billion unit tests? If you didn't review them, you don't have tests. You have a pile of code that looks like tests. And "it takes too long to review" is not an excuse. It's like saying "it's too hard to make a car, so I just took a cardboard box, wrote FERRARI on it and sat inside it making car noises". Fine, but it's not a car. It's just pretending. If it's not properly verified, what you have is not tests, it's just pretending.

ako|about 1 month ago
I’m well aware, thank you, have been coding for 40+ years (including 6502 and 68000 assembly), masters in computer science, have built healthcare software where bugs can lead to death. But with LLMs enabling us to generate source code faster, our review process is becoming an increasingly larger bottleneck for productivity. We need to start thinking how we can scale this process.
smsm42|about 1 month ago
It's as much bottleneck for productivity as cars being made of metal are bottleneck for speed. Sure, you can make a paper car. It probably would be faster. Until you collide with something and then you discover why the metal frame was a good idea. If you generate code that you can not verify or test, sure, it's faster. Until something goes wrong.
ako|27 days ago
That is why formula 1 race cars are mostly carbon fiber, including the frame. They identified the bottleneck, and replaced it with a better solution...
smsm42|25 days ago
Better solution for a race car, driven on specialized tracks, by highly seasoned professional, who is paid enormous money, in compensation for the grave risks he is taking. It's not a better solution for everyday car driven by everyday person. Most race cars aren't even street legal, and getting arrested for driving one on the street would be the best outcome. Getting seriously hurt would be most likely.
Pannoniae|about 1 month ago
Yeah, you aren't wrong.... I predict two things to happen with this.

1. A more biological approach to programming - instead of reviewing every line of code in a self-contained way, the system would be viewed in a more holistic way, observing its behaviour and testing whether it works for the inputs you care about. If it does, great, ship it; if not, fix it. This includes a greater openness to just throwing it away or massively rewriting it instead of tinkering with it. The "small, self-contained PRs" culture worked well when coding was harder and humans needed to retain knowledge about all of the details. This leads to the next point, which is

2. Smaller teams and less fungibility-oriented practices. Most software engineering practices are basically centred around making the bus factor higher, speeding onboarding up and decreasing the volatility in programmers' practices. With LLM-assisted programming this changes quite a bit: a smaller, more skilled team can more easily match the output of a larger, more sluggish one, due to the reduced communication overhead and being able to skip all the practices which slow development velocity down in favour of doing things. A day ago, the good old Arthur Whitney-style C programming was posted to this site (https://news.ycombinator.com/item?id=45800777) and most commenters were horrified. Yes, it's definitely a mouthful on first read, but this style of programming does have value - it's easier to get an overview of, easier to modify than a 10KLOC interpreter spanning 150 separate files, and it's also quite token-efficient too. Personally, I'd add some comments, but I see why this style is this way.

Same with style guides and whatnot - the value of having a code style guide (beyond basic stuff like whitespace formatting or wordwrapping at 160) drastically drops when you do not have to ask people to maintain the same part for years. You see this discussion playing out: "my code formatter destroyed my code and made it much more unreadable" - "don't despair, it was for the greater good, for the sake of codebase consistency!". Again, way less of a concern when you can just tell an LLM to reformat/rename/add comments if you want.

I'd definitely say that getting the architecture right is way more important, and let the details play out in an organic way, unless you're talking about safety-critical software. LLM-written code is "eventually correct", and that is a huge paradigm shift from "I write code and I expect the computer to do what I have written".

sshine|about 1 month ago
Same standard as if they had made it themselves: a sequence of logically ordered commits.
ethin|about 1 month ago
If it's obviously AI generated and is an absurdly long PR, I'd ask them to extensively justify the complexity (especially if it does side quest-isms like this example where the AI created a DSL and stuff: why exactly is the DSL required?). If the project already implements the feature, I'd ask that they remove the re-implemented parts and use what already exists. If one of the dependencies of the project does this, I'd ask that they update the PR to use those instead of wholesale redoing it. If they respond, at all, with AI-generated responses instead of doing it themselves, or their PR description is AI generated, or it's blatantly obvious they used AI, I would immediately mentally classify the PR as an ultra low effort/quality PR until proven otherwise. Might seem harsh, but I prefer PRs from people who actually both understand the project and what the PR is trying to do. I don't mind if people use AI to assist in that understanding; I don't even mind if they use AI to help write parts of the PR. But if I can tell that it's AI generated (and completely re-implementing something that the project either has already or is in the stdlib or a dep is a very good sign of AI generated code in my experience), I'm far more inclined to dismiss it out of hand.
exe34|about 1 month ago
Simple: ask them to break it down into smaller pieces with a clear explanation of what each does and why it's needed. Then set up an AI to drag them through the dirt with pointless fixes. Or just close them as won't-fix.
drbojingle|about 1 month ago
If they can vibe code it they can vibe disassemble it and vibe small PR it.
atoav|about 1 month ago
Tell them to give you a phone call and have them explain the code to you : )
alganet|about 1 month ago
"too big, please break it into smaller self-contained PRs"

[ Close with comment ]

calini|about 1 month ago
Vibe merge review it using Copilot or equivalent, and then close it :)
cryptonym|about 1 month ago
Prompt: be over cautious on every code line, this is junior code and they can learn a lot from this PR. Generate many comments on why it shouldn't be merged as-is and make sure every corner case is covered. Be super paranoid, mistakes in the code could hurt the company or people.

If you are lucky, they will also vibe fix it.

meltyness|about 1 month ago
Proof by counterexample, just find the inevitable security flaw.
Roark66|about 1 month ago
Many people gave good tips, so let me answer in general.

As someone on the "senior" side, AI has been very helpful in speeding up my work. I work with many languages and many projects I haven't touched in months, and while my code is relatively simple, the underlying architecture is rather complex. So where I do use AI, my prompts are very detailed. Often I spot mistakes that get corrected, etc. With this I still see a big speedup (at least 2x, often more). The quality is almost the same.

However, I noticed many "team leads" try to use AI as an excuse to push tasks that are too difficult onto "junior" people. The situation described by the OP is what happens sometimes.

Then when I go to the person and ask about some weird thing they are doing, I get "I don't know, Copilot told me"...

Many times I have tried to gently steer such AI users toward using it as a learning tool: "Ask it to explain the things you don't understand", "Ask it why something is written this way", and so on. Not once have I seen it used like this.

But this is not everyone. Some people have a skill that lets them get a lot more out of pair programming and AI. I had a couple of trainees on my current team 2 years ago who were great at this. This was "pre-AI" at this company, but when I was asked to help them, they asked all sorts of questions, and 6 months later they were hired on a permanent basis. Contrast this with: "so how should I change this code?" You give them a fragment, they put it in verbatim, and they come back via Teams with a screenshot of an error message...

Basically, they expect you to do the task for them. Not a single question. No increased ability to do it on their own.

This is how they try to use AI as well. And it's a huge time waster.

randomNumber7|about 1 month ago
You can lead a horse to water, but you can’t make it drink.

Also people with that mentality had been a waste of time before AI too.

Lapsa|about 1 month ago
Strict enforcement of a lines-of-code limit will lead to half-finished change requests and leak technical gibberish upstream to the lovely business folk.
ninetyninenine|about 1 month ago
You vibe review it. I’m actually only half kidding here.
occz|about 1 month ago
Easy, you reject it.
pomarie|about 1 month ago
One thing that actually works is getting AI to review the basic stuff first so you can focus on architecture and design decisions. The irony of using AI to review AI-generated code isn't lost on me, but it does help.

That said, even with automated review, a 9000 line PR is still a hard reject. The real issue is that the submitter probably doesn't understand the code either. Ask them to walk you through it or break it down into smaller pieces. If they can't, that tells you everything.

The asymmetry is brutal though. Takes an hour to generate 9000 lines, takes days to review it properly. We need better tooling to handle this imbalance.

(Biased take: I'm building cubic.dev to help with this exact problem. Teams like n8n and Resend use it to catch issues automatically so reviewers can focus on what matters. But the human review is still essential.)

CamperBob2|about 1 month ago
Please review this PR. Look carefully for bugs, security issues, and logical conflicts with existing code. Report 'Pass' if the PR is of sufficient quality or 'Fail' if you find any serious issues. In the latter case, generate a detailed report to pass along to the submitter.

(ctrl-v)

ErroneousBosh|about 1 month ago
Instant reject, advising them not to resubmit.
dearilos|about 1 month ago
Put up guardrails to enforce quality code.
paul_h|about 1 month ago
Ask AI number 2 to summarize the intention (look at the .patch) into markdown. Reset. Ask your AI to read the intention as if the original author had written it, say you have grave doubts about the contribution, functionally and non-functionally, and ask it to help you put that into words to feed back to the contributor. Basically the playbook from https://paulhammant.com/images/SimonSinghsFermatExcerpt.jpg
nish__|about 1 month ago
Build it locally and QA test it.
bitbasher|about 1 month ago
"CoS" - Close on Sight
drfrank3|about 1 month ago
AI creates slop: dead or inefficient code that can be cleaned up. I think developers who obsess over control have a difficult time adjusting to this.

The greater danger is that AI can create or modify code into something that is disconnected, stubbed, and/or deceptive, and claim it's complete. This is much worse because it wastes much more time, but AI can fix this too, just like it can fix the slop: maybe not deterministically, but it can.

And because of this, those that get in the way of creating source with AI are just cavemen rejecting fire.

vasan|about 1 month ago
Just reflect on it: did you give him too little time to complete it? I would just have a meeting with him and confront it.
ZeroGravitas|about 1 month ago
How you reject the first one of these, compared with the hundredth and the millionth(!), is probably going to be an interesting development over the next few years.

Personally, I've felt drained dealing with small PRs fixing actual bugs by enthusiastic students new to projects in the pre-slop era.

Particularly if I felt they were doing it more to say they'd done it, rather than to help the project.

I imagine that motive might help drive an increase in this kind of thing.

shinycode|about 1 month ago
Don’t read it, approve it.
ChrisMarshallNY|about 1 month ago
I write full app suites that have less than 9000 LoC. I tend toward fewer, large-ish source files, separated by functional domains.

I once had someone submit a patch (back in the SVN days), that was massive, and touched everything in my system. I applied it, and hundreds of bugs popped up.

I politely declined it, but the submitter got butthurt, anyway. He put a lot of work into it.

0x000xca0xfe|about 1 month ago
Fight slop with slop. Use an AI to review it in excruciating detail and write a lengthy justification for the rejection. Make sure to really hit a couple thousand words.

Maybe getting their own time wasted will teach the submitter about the value of clarity and how it feels to be on the receiving end of a communication with highly asymmetric effort.

chuckadams|about 1 month ago
Nice, but I suspect the PR submitter will simply reply with their own AI-generated counterpoints. They'll probably come to an agreement to turn both of you into paperclips.
0x000xca0xfe|about 1 month ago
If they resubmit, you at least have proof that the conversation is completely braindead and that the other side is happy to engage in unprofessional behavior and to generally waste time and resources.

This should be convincing enough even for a non-technical team lead, while for the initial PR it might be hard to explain objectively why it's bad.

hshdhdhehd|about 1 month ago
With a middle finger
userbinator|about 1 month ago
If it's full of the typical vibe-coded nonsense that's easy to spot upon a quick-but-close inspection (unused functions, dead-end variables and paths that don't make sense, excessively verbose and inaccurate comments, etc.), I would immediately reject.
darepublic|29 days ago
> a DSL parser

oh no

PeterStuer|about 1 month ago
Before reviewing, ask for a rationale and justification. It might be just overcomplicated AI slop; it could also be that someone actually went beyond the basics and really produced something next-level.

A simple email could tell the difference.

never_inline|about 1 month ago
close button.
Sirikon|about 1 month ago
Reject them
mort96|about 1 month ago
Close them.
irvingprime|about 1 month ago
Reject. Period. No compromise. No friendly comments about how it can be improved. Just reject it as unreviewable.

Then ban the idiot who submitted it.

mexicocitinluez|about 1 month ago
The same way you would do literally any other PR. I don't know why this is special.

If the code sucks, reject it. If it doesn't, accept it.

This isn't hard.

ripped_britches|about 1 month ago
Obviously by vibe reviewing it
HelloNurse|about 1 month ago
Complaining about inadequate tests and documentation should be a very efficient and effective strategy against slop.
wheelerwj|about 1 month ago
The same way you do a non vibe coded pr. If its a shit pr, its a shit pr.
est|about 1 month ago
write another AI to hardcore review it and eventually reject it.
foxfired|about 1 month ago
It's funny just today I published an article with the solution to this problem.

If they don't bother writing the code, why should you bother reading it? Use an LLM to review it, and eventually approve it. Then of course, wait for the customer to complain, and feed the complaint back to the LLM. /s

Large LLM generated PRs are not a solution. They just shift the problem to the next person in the chain.

throwawayffffas|about 1 month ago
How do you know they didn't bother to write it? For all we know the submitter has been quietly hammering away at this for months.
foxfired|about 1 month ago
The title says it is vibe-coded. By definition, it means they didn't write it.
throwawayffffas|about 1 month ago
But how do they know it's vibe-coded? It may have a smell to it, but the author might not know it for a fact. Whether it's vibe-coded is actually irrelevant; the size of the request is the main issue.
foxfired|about 1 month ago
I'm not gonna make assumptions on behalf of OP, but if you have domain knowledge, you can quickly tell when a PR is vibe-coded. In a real world scenario, it would be pretty rare for someone to generate this much code in a single PR.

And if they did in fact spend 6 months painstakingly building it, it wouldn't hurt to break it down into multiple PRs. There is just so much room for error reviewing such a giant PR.

sunaookami|about 1 month ago
You can recognize it by the rocket emojis in the PR description ;)
wiseowise|about 1 month ago
Then it would have an extensive VCS history. Unless they just amend it into one humongous commit.
exclipy|about 1 month ago
I made a /split-commit prompt that automatically splits a megacommit into smaller commits. I've found this massively helpful for making more reviewable commits. You can either run this yourself or send this to your coworker to have them run it before asking you to re-review it.

Sometimes it doesn't split along optimal boundaries, but it's usually good enough to help. There's probably room for improvement and extension (e.g. re-splitting a branch containing many non-logical commits, moving changes between commits, merging commits, ...) – contributions welcome!

You can install it as a Claude Code plugin here: https://github.com/KevinWuWon/kww-claude-plugins (or just copy out the prompt from the repo into your agent of choice)
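For anyone without the plugin, the same split can be done with plain git: soft-reset the megacommit so its changes stay in the working tree, then stage and commit one logical unit at a time. A sketch in a throwaway demo repo (file names and messages are made up; a real split would use `git add -p` for finer-grained staging):

```shell
set -e
# throwaway demo repo
tmp=$(mktemp -d) && cd "$tmp" && git init -q -b main
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"

# simulate one megacommit touching unrelated areas
printf 'parser\n' > parser.txt
printf 'docs\n'   > docs.txt
git add . && git -c user.email=a@b -c user.name=demo commit -q -m "megacommit"

# split it: keep the changes, drop the commit, re-commit in logical units
git reset -q --soft HEAD~1
git restore --staged .
git add parser.txt && git -c user.email=a@b -c user.name=demo commit -q -m "parser: add grammar"
git add docs.txt   && git -c user.email=a@b -c user.name=demo commit -q -m "docs: describe grammar"
git log --oneline
```

On an existing feature branch the equivalent would be `git reset --soft main` followed by the same unstage/re-stage loop, then a force-push of the rewritten branch.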