Because the most important parts of expertise come from their internal "world model" and are inseparable from it.
An average unaware person believes that anything can be put into words, and that once the words are said they mean to the reader what the speaker meant, the only possible difficulty being unknown words or overlooked ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge transfers well via words; that's why communicating expertise always succeeds at least partially. But the solidified, interconnected world model of what all your knowledge adds up to cannot be transferred. AI can blow you out of the water on knowing facts, but it doesn't yet use them in a way that, surprisingly often, yields surprisingly correct insights into what further knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated, one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learned. It is not an act of transfer.
> An average unaware person believes that anything can be put into words, and that once the words are said they mean to the reader what the speaker meant, the only possible difficulty being unknown words or overlooked ambiguities.
"Transmissionism" is a term I've seen to describe this
So cool. One reading is “complexity is not what you believe it is”. Another is “complexity is”… “not what you believe it is”. Seems similar but the difference is subtle. Even the “please try listening” line changes in both versions. One is confrontational, the other is empathetic.
This misses the basic problem of incentives. What "the company" wants doesn't matter, it's what the people making particular decisions want.
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
As a /senior/ developer I really dislike blanket statements. I've seen just as many failures caused by
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as by experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach to trying out new things would be different from what it would be for a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways for eager/very open seniors to drive systems into hard-to-escape corners. But then there are people who claim PHP5 is all you need.
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
That doesn't sound as good in meetings. The person who can cut scope and get everyone to the "we did it" back patting phase makes everyone feel warm and cozy.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
This is where good leadership in the dev team is needed.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (eg in an online shop, thus fixing it increases sales)?
Or if it’s just tech debt, then use Jira (etc.) to your advantage and talk about the number of tickets you can close off this sprint thanks to this engineering initiative.
If the development and product teams' goals are largely aligned, then the problem with engineering initiatives is just how you explain them to the product team.
For a large enough problem you need a combination of enough skill (to do the job), enough foresight (to know what will likely go wrong and how much error budget you need), and skin in the game (so you don't just cut things that sound good, but keep what is truly needed). If you don't have all three of these, you are usually just talking out of your ass.
A sort of survivorship bias. A VP ordered us to use Elasticsearch, because it had worked well at his previous company. It turned out it worked well for us. Listen to the VP to make technical decisions. And use Elasticsearch.
Reminds me of when the ELK stack was called just ELK (idek what it is now). We had a server we put it on, and after making the additional dashboards my manager wanted, we learned the limits of ES/ELK. It needs a ridiculous amount of memory, because it will shove everything into memory. Same thing when I learned that MongoDB indexing puts every item in memory as well, which is a yikes; why would you not want to index?
I bet there's money to be made in building a drop-in replacement for either of those two that requires less memory; it would save companies a bundle, and make other companies a bundle as well.
There's no high-performance database that won't take all of your memory (at least up to the size of your data) if you let it.
That's because it's much, MUCH faster to do it that way, though if you can accept certain latency trade-offs for throughput, something like turbopuffer can do wonders for your costs.
If the data is smaller than RAM but every read has to come off disk again, that's the slowest it can possibly be; there's a reason most databases implement a buffer cache (which actually makes writes insanely faster as well). But yeah, MySQL is generally not a very good operational database among all the ones I have tinkered with.
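To illustrate the buffer-cache point, here's a toy LRU buffer pool in front of slow page reads. All names here are invented for the sketch; real engines (InnoDB's buffer pool, WiredTiger's cache) are vastly more involved.

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU page cache: serve hot pages from memory, hit disk only on misses."""

    def __init__(self, capacity, read_page_from_disk):
        self.capacity = capacity
        self.read_page_from_disk = read_page_from_disk  # the slow path
        self.pages = OrderedDict()  # page_id -> bytes, least recent first
        self.disk_reads = 0         # count of slow-path reads

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # mark as recently used
            return self.pages[page_id]
        self.disk_reads += 1
        data = self.read_page_from_disk(page_id)
        self.pages[page_id] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict least recently used
        return data
```

Re-reading a cached page never touches the slow path, which is the whole argument for letting a database hold as much data in memory as you'll allow it.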
I don't really remember, to be fair this was nearly 10 years ago now. Upon some googling now, I do see a way to limit just how much Mongo sucks up for data + index. I am curious if it would have been a smoother experience, if this configuration was even available then.
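For what it's worth, on current MongoDB versions the WiredTiger cache can be capped in `mongod.conf` (by default it claims roughly half of available RAM); whether that knob was available ten years ago I can't say.

```yaml
# mongod.conf: cap the WiredTiger internal cache so mongod
# doesn't claim most of the machine's memory for itself.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
```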
Agree, context matters. As a senior developer you need to understand complexity, risk, upsides and downsides. Understand the business side.
Whether you are a startup or a big company that is already a cash cow makes a difference when changing a core feature of the product, etc... Context, context, context.
Innovation is change, and change is the opposite of stability.
Innovation can reduce pain though, if the current pain is strong enough. A stable stream of failures in production can be the kind of "stability" you want to disrupt.
Yes, all stability in real life is metastability: it needs constant effort to maintain. A worthy innovation can lower this effort, or lower the risk of a catastrophic failure.
What are we talking about? Philosophically, yes. Factually, no. In the context of a system, innovation could be switching from one form that renders in 1 second to another that renders in 50ms. Stability isn't part of that equation.
Is this switching risk-free? Consider all these ancient computer devices that run high-stakes equipment for years and decades without change. An RPi could replace an ancient PDP-11, cost a fraction, consume a fraction of energy, be faster, etc. But it also may introduce new and unknown failure modes.
I think it's possible that this idea would work as a communication/branding strategy for senior developers, though I don't think it's strictly true.
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity, not merely to cling on to what we have for another year but to grow, is to say: "together with the model, we can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
Most proof of concepts I've seen get traction turned into production.
A rewrite?
I recall a few times when everyone promised: if this gets promoted, then we will rewrite it from zero. Never happened.
The article touches on responsibility and accountability. There is none for the risk taker, by definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than we sell it for.
The loop on the right: there are companies, two of them very popular these days, that took it to an extreme. You ship something fast, and since it only scales linearly, you go raise money. Successful companies, countless users, some of whom even pay. Who's to blame? The senior developer, or simply someone reasonable who asks: how is that sustainable, what's the way out of this? Those people are fired, so whoever's left is a believer.
Regarding the viability of rewrites of successful PoCs: Does the current environment change the math? How difficult would it be to overcome the inertia/hesitation/perception of slow, painful projects that may no longer be so?
I guess it's company culture? I had a job where we initially had quick solutions that went messy. We set a hard policy that every "quick and dirty" feature got a follow-up story pulled into the following 1-2 sprints. Often it turned out that the feature didn't live up to expectations and we just disabled or deleted it; the other times we reviewed it and refactored it properly.
We were a highly autonomous team, though, and hardly had cadence complaints. But mostly because all the other departments were lagging. Except marketing; marketing always has "ideas".
This is why you need sufficiently senior engineering leadership (both IC leadership and management). If you have engineers who meekly do whatever a non-technical stakeholder asks then you have a vacuum of responsibility, and sooner or later things will blow up catastrophically and whoever was least adept at CYA will get blamed.
On the other hand, almost any business problem can be solved in a reasonable way that doesn't send your system through any terrible one-way doors if you zoom out enough and ask enough whys. Of course not every place allows engineering to do that, but the ones that don't aren't able to retain senior folks because they will just go somewhere where their judgment is valued. Sometimes technical debt is the right thing for the business, but sufficiently senior engineers can set things up so there is always a way out. But what you can't do is uphold the purity of the system above the business problem. The systems are paid for by the business, so if you lose sight of that then you've lost the plot and the basis for your influence.
Yea, I think even a lot of decent devs are afraid to just say "no" to things. They don't even bargain to find a balanced solution that can be reasonably done in terms of architecture and time to production.
> I recall a few times when everyone promised: if this gets promoted, then we will rewrite it from zero. Never happened.
Why would you do that though? If you have a working 'prototype' that's handling the demand, has the required features, and doesn't really need to be rebuilt (except to appease the sensibilities of the developers), why would you spend time and effort on that? That makes no sense. The fact it's a prototype or a 'proof of concept' is essentially irrelevant if you can't enumerate what the actual problem with it is.
I work with a bunch of teams that complain they're mired in tech debt all the time, and that it's a huge risk and slows them down. Except I can see our incident log, and there aren't many incidents, and none that can be attributed to running risky code in prod. I have our risk register, and it contains no 'this code is old and rubbish and has past-EOL dependencies' entries. And no team has ever managed to articulate how, or even how much, the tech debt slows them down. They shouldn't really claim to be surprised that no one wants them to spend time 'fixing' a problem that apparently has no impact.
I've also seen the opposite case where a team spent months refactoring an app that they wrote before it launches. They wrote it, then decided they could make it 'better', and spent loads of time reworking most of it before it launched. All the value was delayed because they decided they didn't like their own work. And obviously the leadership team were pissed off about that, and now there's very little trust left.
There should be a good conversation about delivery of work between teams and stakeholders or no one will be happy, but if that isn't happening the stakeholders will always win.
Because the goal isn't "keep this exact version of the app alive and running". The prototype is never the whole application. If your only metric is incidents, then yeah, don't ever touch the code again.
You can get a few feet closer to the moon by building a treehouse, but you still can't turn it into a spaceship.
This problem definitely predates AI coding agents, though it may be exacerbated by them. The article essentially concludes with the ancient advice of "plan to throw one away". Well sure, I also read Mythical Man Month, but how do I convince the decision-makers?
After my first proof of concept went into production by surprise, I stopped building proof of concepts and started building MVPs.
That's not to say that my first pass that I show people is ready to go into production, but I build the PoC from the beginning with the idea that it _is_ going into production and make sure I have a plan to get to production with it while I am working on it.
What I found is that my willingness to communicate and share my expertise is usually not in demand with more junior developers. In general, I find developers uninterested in finding a mentor. They don't look at your LinkedIn profile; they don't look at you as a possible source of knowledge and expertise.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
This is my frustration at my current job. There's so much silliness and no one cares about avoiding it.
A less experienced dev suggested using "AI magic" to replace a URL validator. I protested, suggesting a cached fuzzy-match solution (prepopulated by AI)... and no one cared. Now the AI model has been suddenly turned down, and our system is broken. We're going to have to re-validate the whole system.
A younger developer who got promoted over me tried to write a doc on possible ways to fix it. He said "hey Dan, can you help me with this?" He got promoted over me because the way to get ahead is to write docs and have meetings, not do things sensibly. Now he's trying to use my work to demonstrate his leadership.
No one cares. The more I offer better solutions, the more it's a threat to less experienced developers. Things mostly work so my manager doesn't care. There's probably better ways for me to have handled things, but it's so exhausting fighting the nonsense and I just want to write good code.
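As an aside, the cached fuzzy-match idea above could be sketched roughly like this. Everything here is hypothetical (the known-good set, the cutoff, the function name), and difflib's similarity threshold would need tuning against real data:

```python
import difflib
from urllib.parse import urlparse

# Cache of known-good URLs; in the scheme described above, an AI pass
# would prepopulate this once, instead of sitting on the hot path.
KNOWN_GOOD = {"https://example.com/docs", "https://example.com/pricing"}

def is_probably_valid(url: str, cutoff: float = 0.9) -> bool:
    # Cheap structural check first: must parse with a scheme and host.
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        return False
    # Exact cache hit is free.
    if url in KNOWN_GOOD:
        return True
    # Otherwise fuzzy-match against the cache to tolerate small typos.
    return bool(difflib.get_close_matches(url, KNOWN_GOOD, n=1, cutoff=cutoff))
```

The point of the design is that validation keeps working even if the model behind the cache is turned off later; only cache refreshes would stop.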
> So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
Seriously. It kills me to have so much knowledge and expertise that few people appear to care about, if they don't downright hate me for wanting to pass it on to others. Institutional knowledge doesn't seem to have any value these days.
Wish I had you at my first engineering job at IBM. A couple senior devs there (not all) would get pissed when juniors tried asking them questions. Not only did it take a bit of courage to ask someone who had been there 20 years about something, but it was a 50/50 chance they were going to be an asshole to ya lol. Was a good learning experience for me - I go out of my way to mentor now.
All the senior developers I have worked with are absolutely allergic to coming into the office, working closely with junior developers, and in general talking to people.
Whereas juniors are eager to chat, have lunch with you, and share what they’re working on, the seniors are guarded and solitary.
I took a job in another state in large part because one of the interviewers was a highly skilled sysadmin that I wanted to learn from (I had basically backed myself into system administration as a career at my first job, a startup, so I didn't have a lot of people to lean on to learn my trade).
Of course, he turned in his notice shortly after I arrived, because he had found his successor. So, that didn't work out so well for me.
Are the juniors you ran into psychologically obsessed with being self-reliant? Or too proud of their own ideas?
I also believe that some of seniors' experience is flesh-level resilience. I'm no smarter than when I joined the industry; I just got used to being in the trenches, to handling my own psychology, to how all the easy-looking things are not easy and the horrible-looking ones aren't horrible either. I could explain this in detail to any junior, but until they're on the minefield it won't mean much.
Exactly my experience. You describe it more diplomatically than I do hah.
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts, and none of the history, politics, communication, etc.
I think a lot of it is that an AI or Google search won't challenge them, push them, or disagree with them. That's comforting to them, and more desirable than the learning that could happen.
I like to play an online strategy game, openfront.io. The way to win is to take out someone who is gaining power before they get too powerful.
It's just basic game theory, and you see it everywhere. However, it's so annoying in the workplace when your two options seem to come down to dominating or being dominated. Especially if you care about quality code and don't care for meetings.
As far as I'm concerned, I think I have to make peace with the fact that if I don't play the game, I am going to be managed by people who don't know what they're doing. But neither option seems particularly good. Should I try to bury my ego and influence from below? Should I work harder and try to climb the corporate ladder? I'm still not sure.
I don’t think it’s the arrogance of youth. It’s just that this generation and honestly a big cohort of millennials are not used to gleaning information from people. A stunning number of people have been raised/educated solely by the internet. That’s the source for knowledge, not other people.
> A stunning number of people have been raised/educated solely by the internet. That’s the source for knowledge, not other people.
On the internet you can learn from, and sometimes interact with, the best of the best, so the bar for what constitutes an "expert" is raised much higher.
For all I know maybe you are an expert, but as a general rule of thumb - people are sick of "experts" eager to share their "expertise".
It's simply the case that the supply of "experts" wanting to share "expertise" vastly eclipses the demand by several orders of magnitude.
I think there's a business somewhere, where you get paid to listen to "experts" and they get to feel better about themselves. It's a win-win.
So if people don't perceive you as an "expert" and don't go to you for answers, you simply do not register as one. Or they have a rather high bar that requires observable, undeniable artifacts (and I don't mean credentials, I mean software), and the competition is rather fierce: there's simply an overproduction of people who think they are "experts", so you have to show unmistakable signs of being one to register.
A really competent senior figures out what the prevailing culture of the company is now, and what it will need to be in 5 years, and adapts as they go. Startups with 5 people maybe don't need extra complexity costing runway. A 500 person business may need that complexity because now there are second-order effects that need to be mitigated for every business decision. It's not a black-and-white "always avoid complexity" it's "add complexity when it makes sense" and even that question has a lot of nuance because sometimes the business just needs to survive for another couple of months.
Right; prioritization and transparency allow you to change the variables that people should be using to solve a problem (and if that doesn't happen, they are not good at the job). If you have two hours before a storm comes, you will be asking "will it take on enough water that I can't bail it out?" instead of thinking about your architecture.
The problem I see is management playing games: not talking about how much money is available, what the real timelines are, etc., because they fear the people contributing will leave before the critical moment. So people keep making stupid decisions in that context, and then you all get to find a new job.
Complexity, if it can be reduced to a single measurable dimension, is only one of several factors in a solution space.
There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all of them apply in every case.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of what I consider the differentiators between a senior and a staff+ developer.
“Complexity” understood as the immediate first impression a junior gets looking at some arbitrary facet is always too much and always bad.
“Complexity” understood as what’s going to make development on this system fly easy and fast for the next 10 man-years de facto means taking side steps where naive approaches would charge straight ahead.
Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me. So much “faster” for weeks, and it just meant the schedule slipped 6 months.
TRADEOFFS! I think this is IT. Non-programmers imagine there aren't tradeoffs. As a programmer, one should eventually realise that every possible aspect of design is a tradeoff.
They all influence each other to one extent or another.
And the Cynefin framework defines “complexity” a bit differently from the intuitive way it’s often used.
The simple domain is a single dimension. The complicated domain is a system of factors. I think when most people say “complex”, they are really talking about what Cynefin labels as “complicated”.
The Cynefin complex domain is not so easily solved or reduced. It has emergent behaviors. The act of measuring tends to perturb the system. No single solution will ever settle something in the Cynefin complex domain, because the complex system will shift behavior, making solutions that worked before start working against it.
Examples are ecosystems and economies. Software systems tend to be complicated, not complex, until you start getting into distributed systems.
One of the key insights of Cynefin is that each of the domains has its own way of solving things, and that oftentimes people use solutions and methods from one domain to attack problems characterized by a different domain.
You don’t solve problems in the complicated domain with methods from the simple domain. And you don’t solve problems in the complex domain with methods that work for complicated domains.
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade-offs, though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
Building trust is yet another quality of a good senior. By that I don't mean being buddies with the CEO, but earning trust from everyone by making good decisions and arguments and delivering as promised. Even giving a jr a warning and letting him fall flat is a good trust-building exercise.
The best strategy is to frame your argument from the perspective of the customer:
> This will allow us to deploy the feature in only X days, supporting Y use case for Client W, who has been complaining about this shit for Q months now.
Arguments like:
> We should do Z because it would provide future extensibility.
> Z could eventually enable some novel platform capabilities.
> Z is easier to unit test.
Are much less likely to succeed in the business contexts that I have experienced so far.
We may be looking at this differently based on our own experience, fwiw. I also should have said added complexity, or the lack of it (from poor planning).
That can work too, e.g. when demonstrating the pain a customer will experience when something complex is poorly designed (like some b2b workflow), but it's less visceral than telling your internal stakeholders all the extra work they'll have to do if it's rushed. Even the best of your peers are a bit selfish. The business side has a lot of incentives around quick turnarounds so it's easy to overlook the downside.
Imagine such a scenario: you're in healthcare, working on a feature that will add a new data model for some kind of clinical information.
You could say:
> This will allow us to deploy the feature in only X days, supporting Y use case for Client W, who has been complaining about this shit for Q months now.
Yeah that very well may prevent W from churning, though hopefully you think about how it will affect other clients too.
Or, you could say
"If we get this data model wrong, and the value set is ambiguous, you (product/sales/cs) will have to reach out to every single customer and clarify what they meant by x/y/z if we wish to migrate it with any degree of accuracy in the future."
That's drawn from experience but I'm sure there are a lot of parallels to that in other industries for any kind of data. Migrating data is a pain in the ass for everyone, but often it can be the people pushing for a quick solution that suffer the most when that goes wrong.
This kind of stuff is why commission structures should consider churn / residuals. Bad incentives make for hastily made decisions.
Even with AI, there is a clear difference between juniors and seniors.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
I may be missing something, but the "left" and "right" loops strike me as slightly different words for the same exact thing.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
The company is providing existing services to existing users for payment.
The company is offering potential new services to current and potential users in the market and getting feedback on how valuable those new services might be.
I tripped over the double entendre of the teaser quote and then found it ironic that the author is a copywriter.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
I do a bit of both... I pay attention to new tools, libraries and languages, but will rarely recommend them initially. That said, I also tend to fight complexity to an extreme degree: KISS and YAGNI are my top enterprise development keystones.
I found that the proposers of features "want everything" because they don't know what is critical; they're therefore totally unwilling to accept anything other than "the full monty". So as a senior developer you cannot propose any faster route.
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
Unfortunately that's not the case. There are many senior and above level engineers out there who are unskilled communicators but very technically skilled.
Interesting article. I appreciate the range of perspectives here, and the overall pitch to keep the most experienced in frame alongside new-fangled advancements (AI).
The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
The polarization of speed vs. scale concerns on a team is interesting.
It maps to what we believe on our team: functional vs. non-functional. AI ships functional features fast, but developers are more important than ever in making sure the non-functional aspects are taken care of.
Speed… speed… velocity… speed. All I hear about these days. Every meeting.
Honest question: does high velocity / first-mover advantage ever really pay off these days?
I don't feel like being the first to market with AI slop has actually paid off for anyone. Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of work proving the idea works, and everyone else swoops in with better product or at least at a cheaper rate.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
> I don't like the kind of senior developer that says "I found this new tool and it’s pretty cool ..."
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
I think that if this becomes an actual problem, there will be such a massive incentive to add AI to the scale/compression/risk avoidance side that there will be automated tools specialized in that kind of work.
I feel like this is shooting from the hip from a single point of view from some semi-large corpo.
I agree with the author's premise - that one feedback loop optimizes for speed, and the other for scale - but I don't think the market is bearing out the conclusion - that AI should be utilized to enable more rapid experimentation, where we better scale what works.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that hastily-generated AI features cause customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
This is well-put, but the problem comes when you’ve got leadership looking at what appears to be a fully-functioning version of the product that the market is clearly indicating to them is sufficient to drive revenue. Budgeting the 6 weeks or whatever to translate from “the working version” to “the trustworthy version” is a hard pitch.
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.
> And this is how AI affects the two loops
There should be another image illustrating the amount of mitigation done on the senior's side, red-team/blue-team style.
I can/have done this without AI and it tends to be disastrous. Management declares we need X fast. Okay, we can build that really fast, but it won't scale. Management says fine, just build it. We do. Management now wants to build Y fast. But wait, what about X? Never mind, just build Y now. Okay, we're building Y, and X collapses... because it wasn't built to scale. Now we're being called in at 2 am to fix X while also expected to ship Y tomorrow. Sure, they'll glow you up and tell everyone what a hero you were for coming to the rescue at 2 am, but on that six-month performance review, the blowup is used as a reason to withhold raises and promotions. They don't lose any sleep of course, just you, the developer.
Irrespective of the linked post, let me say why I (being sort-of-a senior developer) fail to communicate my expertise. In no particular order:
1. I am discouraged or forbidden from devoting time to communicating my expertise; they would rather use it. Well, often, they'd rather I did the grunt work to facilitate the use of my expertise.
2. Same, but devoting time to preparing materials which communicate my expertise.
3. A lot of my expertise is a bunch of hunches and intuitions, a "sense of smell" for things. And that's difficult to communicate.
4. My junior colleagues don't get time off their other duties to listen to "expertise sharing", when it does not immediately promote the project they're working on.
5. Many of my junior colleagues lack enough fundamentals (IMNSHO) for me to share all sorts of expertise with them. That is, to share B with them I would need to first teach them A, and knowing A is not much of an expertise; but they're inexperienced, maybe fresh out of university.
6. My expertise may only be partially or very-partially relevant to many of my colleagues; but I can't just divide the expertise up.
7. For good reasons or bad, I have trouble separating my expertise from various ethical/world-view principles, which fundamentally disagree with the way things are done where I'm at. So, such sharing is to some extent a subversive diatribe against the status quo.
8. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I am apprehensive to talk about what I feel I actually don't know enough about - which may just result in my appearing presumptuous and not knowledgeable enough.
9. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I try to polish and complete my expertise before sharing it - and that's a path you can walk endlessly, never reaching a point where you feel ready to share.
10. Tried sharing some expertise in the past, few people attended the session, I got demotivated.
11. Tried sharing some expertise in the past, few people were engaged enough to follow what I was saying, I got demotivated.
12. Shared some expertise in the past, got positive feedback, but then those people who seemed to appreciate what I said did not implement/apply any of it, even though they could have and really should have.
Probably because, unlike in apprenticeships, a senior developer isn’t an owner. This creates a situation where imparting knowledge means you have less time for your own packed backlog of work.
It's all relative. There is no baseline for expertise in software. So instead it's whatever self-serving quality some sociopath on the other end favors.
The unspoken reason why this happens: it is almost always political, a way to make oneself more valuable in the organization and harder to fire / lay off.
That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others but absent and contrarian when the same is done to them.
That leverage does not work anymore in the age of AI, as having "expensive" seniors begging for a pay rise can cost the company an extra amount of $$$. So it is tempting to lay them off for another one that is a yes-person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup instead of being difficult to work with for the sake of a performance review.
FTA: “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, does it scale, etc. All of those things are extremely important. However, most leadership never cared about any of that anyways. They only heard those as excuses why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers that wanted to complain about IT, cybersecurity, DevOps, and cloud architects for getting in their way - and claimed that if they only had administrator access they could get everything done themselves, because they are experts in networking and everything else? Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
Now they *think* they can create the apps themselves. I say let every CEO and business administrator try; business will fail, everything will get shitty, and eventually somebody somewhere might learn something. Let 'em cook.
> Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
And society is beginning to suffer from it. AWS alone managed to slop itself into outages twice in a matter of a year [1] (and I bet that's just the stuff that escalates into mass-visible outages, not the "oh, can't start a new EC2 instance of a specific type for a few hours" kind), and a lot of companies were affected.
It's always the same game: by the time the consequences of the beancounters' actions come home to roost, they have long since departed with nice bonus packages, leaving the rest to dig out the mess.
> Ah, well, it can’t yet do the one thing senior developers still do. Take responsibility.
If only higher-ups would recognize that. Instead we see left and right mass layoffs, restructurings and clueless higher-ups who clearly drank not just a bottle of koolaid but a barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why even invest a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher ups all have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment and society to pick up the tab. Yet again.
I’m inclined to take the author at their word that they’re a copywriter by trade.
I agree that the punchy staccato and the rhetorical questions smell AI-ish, but the way this person uses them, there’s, like, a payload each time. Versus LLM-speak, where the assertions are at best banal and more frequently just confusing.
The written word is how people interact with LLMs. Clarity and precision in writing result in more effective prompting of LLMs. It is just as possible that leaning heavily on AI writing will be seen as a marker of not being natively skilled enough at writing to prompt LLMs effectively, because of the GIGO principle.
There's no fundamental reason that I have to read random blogposts from people I don't know. I do it today because I find it to be an enjoyable way to learn more about my profession and explore various perspectives on it. If I stop finding it enjoyable because too many people write their posts with AI, I'll stop reading these kind of blogs altogether, in the same way that I (and I suspect many commenters here) do not read even the most lovingly crafted Linkedin posts.
I'm either the biggest idiot in the world or this person is a terrible "copywriter". I found this post to be nearly unintelligible: "You can’t explain away someone else’s problem using your own problems." WTF does that mean? This would be a good place to put some very simplistic examples of what they mean, but they don't. Is that because they're trying to be succinct? Clearly not, as the post rambles on and on anyway. I hate posts that are both 1. not explaining their concept and 2. super long-winded. That's a problem.
Are we just trying to say, "use AI for prototyping and customer demos that don't need to be mature, use senior devs to develop and maintain the real products"? You could just say that, then...? Which I also disagree with as how AI should be used: AI is valid to include as a tool across all forms of development - it just should never be put in charge of production-level software (e.g. no vibe coding of mission-critical components).
An average unaware person believes that anything can be put in words and once the words are said, they mean to reader what the sayer meant, and the only difficulty could come from not knowing the words or mistaking ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred via words well, that's why there is always at least partial success at communicating expertise. But solidified interconnected world model of what all your knowledge adds up to, cannot. AI can blow you out of the water at knowing more facts, but it doesn't yet utilize it in a way that allows surprisingly often having surprisingly correct insights into what more knowledge probably is. That mysterious ability to be right more often is coming out of "world model", that is what "expertise" is. That part cannot be communicated, one can only help others acquire the same expertise.
Communicating expertise is a hint where to go and what to learn, the reader still needs to put effort to internalize it and they need to have the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
"Transmissionism" is a term I've seen to describe this
https://andymatuschak.org/books/
complexity is
not what you believe it is
please try listening
Very cool
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as with experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways to have eager/very open seniors drive systems into hard to get out corners. But then there are people that claim PHP5 is all you need.
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (eg in an online shop, thus fixing it increases sales)?
Or if it’s just tech debt, then use Jira (etc.) to your advantage and talk about the number of tickets you can close off this sprint due to this engineering initiative.
If the development team's and product team's goals are largely aligned, then the problem with engineering initiatives is just how you explain them to the product team.
I bet there's money to be made for building a drop-in to either of those two that requires less memory, would save companies a bundle, and make other companies a bundle as well.
> why would you not want to index?
Because if you don't need an index it wastes RAM, as you've learned. Maintaining indices also has a cost. Index only what you need.
In the sense of the blog post: A senior with decent DB experience would have told you. ;)
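The trade-off the parent describes is easy to see in miniature. Here is a minimal SQLite sketch (the table, column, and index names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)

q = "SELECT id FROM users WHERE email = 'user5000@example.com'"

# Without an index, the lookup is a full table scan over all 10k rows:
print(conn.execute("EXPLAIN QUERY PLAN " + q).fetchone()[-1])  # e.g. "SCAN users"

# The index turns the lookup into a B-tree search - but it now occupies
# RAM/disk and must be updated on every INSERT, UPDATE, and DELETE:
conn.execute("CREATE INDEX idx_users_email ON users(email)")
print(conn.execute("EXPLAIN QUERY PLAN " + q).fetchone()[-1])  # mentions idx_users_email
```

The exact plan strings vary by SQLite version, but the scan-vs-search distinction is the point: every index is a speed/space trade, which is why "index only what you need" is the senior answer.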
That's because it's much, MUCH faster to do it that way, though if you can deal with certain type of latency trade offs for throughput something like turbopuffer can do wonders for your costs.
The qualities were highlighted because they can all lead to better stability.
Innovation can reduce pain though, if the current pain is strong enough. A stable stream of failures in production can be the kind of "stability" you want to disrupt.
Complete stability is death.
> Yes, yes, of course this is simplistic.
It's an example, put to the extreme, to clearly communicate the ideas. As all things, the golden mean applies, as I understand the article argues for:
> the design of the 'Scale' version is influenced by what worked and what doesn’t work in the 'Speed' version of the system.
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity to not merely cling on to what we have another year but to grow is to say "together, the model can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
A rewrite?
I recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.
The article touches on responsibility, accountability. There is none for the risk taker, by definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than we sell it for.
The loop on the right. There are companies, two of them are very popular these days, they took it to an extreme. You ship something fast, and since it only scales linearly you go raise money. Successful companies, countless users, some of them even pay. Who's to blame? The senior developer, or simply someone reasonable who asks, how's that sustainable, what's the way out of this? Those are fired, so whoever's left is a believer.
We were a highly autonomous team though and hardly had cadence complaints. But mostly because all the other departments were lagging. Except marketing; marketing always has "ideas".
Old quote: "There is nothing so permanent as a temporary hack."
On the other hand, almost any business problem can be solved in a reasonable way that doesn't send your system through any terrible one-way doors if you zoom out enough and ask enough whys. Of course not every place allows engineering to do that, but the ones that don't aren't able to retain senior folks because they will just go somewhere where their judgment is valued. Sometimes technical debt is the right thing for the business, but sufficiently senior engineers can set things up so there is always a way out. But what you can't do is uphold the purity of the system above the business problem. The systems are paid for by the business, so if you lose sight of that then you've lost the plot and the basis for your influence.
Why would you do that though? If you have a working 'prototype' that's handling the demand, has the required features, and doesn't really need to be rebuilt (except to appease the sensibilities of the developers), why would you spend time and effort on that? That makes no sense. The fact it's a prototype or a 'proof of concept' is essentially irrelevant if you can't enumerate what the actual problem with it is.
I work with a bunch of teams that complain that they're mired in tech debt all the time, and complain that it's a huge risk and it slows them down. Except I can see our incidents log and there aren't many incidents and none that can be attributed to running risky code in prod, I have our risk register that has no 'this code is old and rubbish and has past-EOL dependencies on it', and no team has ever managed to articulate how or even how much the tech debt slows them down. They shouldn't really claim to be surprised that no one wants them to spend time 'fixing' a problem that apparently has no impact.
I've also seen the opposite case where a team spent months refactoring an app that they wrote before it launches. They wrote it, then decided they could make it 'better', and spent loads of time reworking most of it before it launched. All the value was delayed because they decided they didn't like their own work. And obviously the leadership team were pissed off about that, and now there's very little trust left.
There should be a good conversation about delivery of work between teams and stakeholders or no one will be happy, but if that isn't happening the stakeholders will always win.
You can get a few feet closer to the moon by building a treehouse, but you still can't turn it into a spaceship.
In a world where people (stakeholders, Product, and dev teams alike) want the prototype to be the full set of MVP features, this is not true.
That's not to say that my first pass that I show people is ready to go into production, but I build the PoC from the beginning with the idea that it _is_ going into production and make sure I have a plan to get to production with it while I am working on it.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
A less experienced dev suggested using "AI magic" to replace a URL validator. I protested, suggesting a cached fuzzy-match solution (prepopulated by AI)... and no one cared. Now the AI model has been suddenly turned down, and our system is broken. We're going to have to re-validate the whole system.
A younger developer who got promoted over me tried to write a doc on possible ways to fix it. He said "hey Dan, can you help me with this?" He got promoted over me because the way to get ahead is to write docs and have meetings, not do things sensibly. Now he's trying to use my work to demonstrate his leadership.
No one cares. The more I offer better solutions, the more it's a threat to less experienced developers. Things mostly work so my manager doesn't care. There's probably better ways for me to have handled things, but it's so exhausting fighting the nonsense and I just want to write good code.
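For what it's worth, the cached fuzzy-match idea mentioned above could be sketched roughly like this (the host allowlist, function name, and 0.8 cutoff are illustrative assumptions, not the commenter's actual design):

```python
from difflib import get_close_matches
from urllib.parse import urlparse

# Hypothetical cache of already-validated hosts. The comment suggests
# prepopulating such a cache with AI offline, rather than calling a
# model on every request.
VALIDATED_HOSTS = {"example.com", "docs.example.com", "api.partner.io"}

def is_probably_valid(url: str, cutoff: float = 0.8) -> bool:
    """Cheap, deterministic check against a cache of known-good hosts.

    Exact hits are accepted; near-misses (typos like 'exmaple.com')
    are caught by fuzzy matching, with no model in the request path.
    """
    host = urlparse(url).hostname or ""
    if host in VALIDATED_HOSTS:
        return True
    return bool(get_close_matches(host, VALIDATED_HOSTS, n=1, cutoff=cutoff))

print(is_probably_valid("https://example.com/page"))  # True (exact hit)
print(is_probably_valid("https://exmaple.com/page"))  # True (fuzzy hit)
print(is_probably_valid("https://totally-else.net"))  # False
```

The key property, which the "AI magic" replacement lacked, is that the hot path stays deterministic and keeps working even if the model behind the cache is turned down.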
Seriously. It kills me to have so much knowledge and expertise that few people appear to care about - some downright hate me for wanting to pass it on to others - as institutional knowledge does not seem to have any value these days.
Whereas juniors are eager to chat, have lunch with you, and share what they’re working on, the seniors are guarded and solitary.
Maybe that’s just my workplace though!
And yes, the office is important.
Of course, he turned in his notice shortly after I arrived, because he had found his successor. So, that didn't work out so well for me.
I also believe that some of seniors experience is flesh-level resilience. I'm no smarter than when I joined the industry, I just got used to being in the trenches, how to handle my own psychology, how all the easy-looking things are not and how the horrible ones aren't either.. I could explain this in detail to any junior, but until they're on the minefield it won't mean much.
Honestly I have the feeling that this is often insecurity. It's easy to feel uncomfortable if you think you don't follow along.
Another issue is that juniors usually experience culture shock at their first jobs. So they more or less isolate and do things the way they learned them.
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts and none of history, politics, communication etc
I think a lot of it is that an AI or Google search won't challenge them, push them, or disagree with them - and that's comforting to them, and more desirable than the learning that could happen.
It's just basic game theory, and you see it everywhere. However, it's so annoying in the workplace when your two options seem to come down to try to dominate or be dominated. Especially if you care about quality code and don't care for meetings.
As far as I'm concerned, I think I have to make peace with the fact that if I don't play the game, I am going to be managed by people who don't know what they're doing. But neither option seems particularly good. Should I try to bury my ego and influence from below? Should I work harder and try to climb the corporate ladder? I'm still not sure.
On the internet you can learn from and sometimes interact with the best of the best, so the bar for what constitutes an "expert" is raised much higher.
It's simply the case that the supply of "experts" wanting to share "expertise" vastly eclipses the demand by several orders of magnitude.
I think there's a business somewhere, where you get paid to listen to "experts" and they get to feel better about themselves. It's a win-win.
So if people don't perceive you as an "expert" and don't go to you for answers, you simply do not register as one. Or they have a rather high bar which requires observable, undeniable artifacts (and I don't mean credentials, I mean software), and competition is rather fierce - there's simply an overproduction of people who think they are "experts", so you have to show unmistakable symptoms of being one to register.
The problem I see is management playing games by not talking about how much money is available, what the real timelines are, etc., because they fear the people contributing will leave before the critical moment. So people keep making stupid decisions in that context, and then you all get to find a new job.
There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of what I consider the differentiators between a senior and a staff+ developer.
“Complexity”, understood as what’s gonna make development on this system fly easy and fast for the next 10 man-years, de facto means sidestepping where naive approaches would charge straight ahead.
Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me. So much “faster” for weeks, and it just meant the schedule slipped 6 months.
And, the Cynefin framework defines “complexity” a bit differently than the intuitive way it’s often used.
The simple domain is a single dimension. The complicated domain is a system of factors. I think when most people say “complex”, they are really talking about what Cynefin labels as “complicated”.
The Cynefin complex domain is not so easily solved or reduced. It has emergent behaviors. The act of measuring tends to perturb the system. No single solution will ever solve something in the Cynefin complex domain, because the complex system will shift behavior, making solutions that worked before start working against it.
Examples are ecosystems and economies. Software systems tend to be complicated, not complex, until you start getting into distributed systems.
One of the key insights of Cynefin is understanding that each of the domains has its own way of solving things, and that oftentimes people use solutions and methods from one domain to solve problems characterized by a different domain.
You don’t solve problems in the complicated domain with methods from the simple domain. And you don’t solve problems in the complex domain with methods that work for complicated domains.
I think complexity is a byword for 'unintentionally complicated' here.
> "Software systems tend to be complicated, not complex, until you start getting into distributed systems."
these days so much software is "distributed systems".
Being this kind of person can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade offs though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for awhile, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
Building trust is yet another quality of a good senior. By that I don't mean being buddies with the CEO, but earning trust from everyone by making good decisions and arguments and delivering as promised. Even giving a junior a warning and letting him fall flat is a good trust-building exercise.
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Arguments like:
> We should do Z because it would provide future extensibility.
> Z could eventually enable some novel platform capabilities.
> Z is easier to unit test.
are much less likely to succeed in the business contexts that I have experienced so far.
That can work too, e.g. when demonstrating the pain a customer will experience when something complex is poorly designed (like some b2b workflow), but it's less visceral than telling your internal stakeholders all the extra work they'll have to do if it's rushed. Even the best of your peers are a bit selfish. The business side has a lot of incentives around quick turnarounds so it's easy to overlook the downside.
Imagine such a scenario: you're in healthcare and working on a feature that will add a new data model for some kind of clinical information.
You could say:
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Yeah that very well may prevent W from churning, though hopefully you think about how it will affect other clients too.
Or, you could say
"If we get this data model wrong, and the value set is ambiguous, you (product/sales/cs) will have to reach out to every single customer and clarify what they meant by x/y/z if we wish to migrate it with any degree of accuracy in the future."
That's drawn from experience but I'm sure there are a lot of parallels to that in other industries for any kind of data. Migrating data is a pain in the ass for everyone, but often it can be the people pushing for a quick solution that suffer the most when that goes wrong.
This kind of stuff is why commission structures should consider churn / residuals. Bad incentives make for hastily made decisions.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
No-one says this.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me, but it seems to be that framing things with one of those sets of labels versus the other may correspond to targeting "complexity" versus "uncertainty" for reduction, and that choosing those labels carefully in turn correlates with "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
The company is offering potential new services to current and potential users in the market and getting feedback on how valuable those new services might be.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; we can now be slow with AI agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
There are ways to navigate it.
The "speed" loop reminds me a lot of RAD (Rapid Application Development). In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
https://www.geeksforgeeks.org/software-engineering/software-...
This maps to what we believe on our team: functional vs. non-functional. AI ships functional features fast, but developers are more important than ever in making sure the non-functional aspects are taken care of.
Honest question: does high velocity / first-mover advantage ever really pay off these days?
I don't feel like having the first AI slop on the market has actually paid off for anyone. Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of the work of proving the idea works, and everyone else swoops in with a better product, or at least a cheaper one.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
I feel like this is shooting from the hip, from the single point of view of some semi-large corpo.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that hastily-generated AI features cause customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.
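As one sketch of what such a safety net could look like, here's a minimal performance-regression guard of the kind AI could generate and a senior could review. The baseline number, tolerance, and function are placeholders, not anything from the article:

```python
import time

# Hypothetical regression guard: fail the build if a hot path gets
# noticeably slower than a stored baseline. Numbers are illustrative;
# in practice the baseline would be loaded from a benchmark file.

BASELINE_SECONDS = 0.05   # placeholder baseline for the hot path
TOLERANCE = 1.5           # allow up to 50% slowdown before failing

def hot_path():
    # Stand-in for the real code under test.
    return sum(i * i for i in range(10_000))

def check_regression():
    """Return (passed, elapsed): passed is False when the hot path
    exceeds the baseline by more than the allowed tolerance."""
    start = time.perf_counter()
    hot_path()
    elapsed = time.perf_counter() - start
    return elapsed <= BASELINE_SECONDS * TOLERANCE, elapsed
```

Wired into CI, a guard like this turns "the system got slower" from a vague suspicion into a reviewable, automated check, which is exactly the stabilising work the comment describes.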
> And this is how AI affects the two loops
There should be another image illustrating the amount of mitigation done on the senior side, red-team/blue-team style.
1. I am discouraged or forbidden from devoting time to communicating my expertise; they would rather use it. Well, often, they'd rather I did the grunt work to facilitate the use of my expertise.
2. Same, but devoting time to preparing materials which communicate my expertise.
3. A lot of my expertise is a bunch of hunches and intuitions, a "sense of smell" for things. And that's difficult to communicate.
4. My junior colleagues don't get time off their other duties to listen to "expertise sharing", when it does not immediately promote the project they're working on.
5. Many of my junior colleagues lack enough fundamentals (IMNSHO) for me to share all sorts of expertise with them. That is, to share B with them I would need to first teach them A, and knowing A is not much of an expertise; but they're inexperienced, maybe fresh out of university.
6. My expertise may only be partially or very-partially relevant to many of my colleagues; but I can't just divide the expertise up.
7. For good reasons or bad, I have trouble separating my expertise from various ethical/world-view principles, which fundamentally disagree with the way things are done where I'm at. So, such sharing is to some extent a subversive diatribe against the status quo.
8. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I am apprehensive to talk about what I feel I actually don't know enough about - which may just result in my appearing presumptuous and not knowledgeable enough.
9. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I try to polish and complete my expertise before sharing it - and that's a path you can walk endlessly, never reaching a point where you feel ready to share.
10. Tried sharing some expertise in the past, few people attended the session, I got demotivated.
11. Tried sharing some expertise in the past, few people were engaged enough to follow what I was saying, I got demotivated.
12. Shared some expertise in the past, got positive feedback, but then the people who seemed to appreciate what I said did not implement/apply any of it, even though they could have and really should have.
That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others while absent and contrarian when the same is done to them.
That leverage does not work anymore in the age of AI, as keeping "expensive" seniors who beg for a pay rise can cost the company extra $$$. So it is tempting to lay them off in favour of a yes-person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup instead of being difficult to work with for the sake of a performance review.
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, scalability, or how the code works. All of those things are extremely important. However, most leadership never cared about any of that anyway. They only heard those as excuses for why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers who wanted to complain about IT, cybersecurity, DevOps, and cloud architects for getting in their way, convinced that if they only had administrator access they could get everything done themselves because they are experts in networking and everything else? Well, those developers are about to have the worst day ever, when every single person on the planet can generate code and will be an "expert" in everything as well.
And society is beginning to suffer from it. AWS alone managed to slop itself into outages twice in a matter of a year [1] (and I bet that's just the stuff that escalates into mass-visible outages, not the "oh, can't start a new EC2 instance of a specific type for a few hours" kind), and a lot of companies were affected.
It's always the same game: by the time the consequences of the beancounters' actions come home to roost, they have long since departed with nice bonus packages, leaving the rest to dig out the mess.
[1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...
If only the higher-ups would recognize that. Instead we see mass layoffs left and right, restructurings, and clueless higher-ups who clearly drank not just a bottle of koolaid but a whole barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why even invest a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher ups all have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment and society to pick up the tab. Yet again.
I agree that the punchy staccato and the rhetorical questions smell AI-ish, but the way this person uses them, there’s, like, a payload each time. Versus LLM-speak, where the assertions are at best banal and more frequently just confusing.
There will be different shades of usage and maybe we draw a line somewhere in there.
So even if AI was not used to write an article, it could "smell" like AI to someone who consumes less of it.
Are we just trying to say, "use AI for prototyping and customer demos that don't need to be mature; use senior devs to develop and maintain the real products"? You could just say that, then. Which I also disagree with as a prescription for how AI should be used: AI is valid to include as a tool across all forms of development - it just should never be put in charge of production-level software (e.g. no vibe coding of mission-critical components).