codeinabox

joined 7 months ago
Over the past 16 months, DX has been running a longitudinal study on AI’s impact on engineering velocity across a sample of more than 400 engineering organizations. We found that as AI tool usage increased by an average of 65%, median PR throughput increased by just under 8%. Most organizations are landing in the 5–15% range—a meaningful gain, but far below the 3x or 10x expectations many leaders are being held to.

[–] codeinabox@programming.dev 4 points 2 months ago

There are some really good tips on delivery and best practices; in summary:

Speed comes from making the safe thing easy, not from being brave about doing dangerous things.

Fast teams have:

  • Feature flags so they can turn things off instantly
  • Monitoring that actually tells them when something’s wrong
  • Rollback procedures they’ve practiced
  • Small changes that are easy to understand when they break

Slow teams are stuck because every deploy feels risky. And it is risky, because they don’t have the safety nets.
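
The first of those safety nets can be sketched in a few lines. This is a minimal, illustrative feature-flag guard (the flag name and checkout functions are made up for the example, not taken from any real flag service): the point is that a risky code path defaults to off and can be switched off instantly at runtime.

```python
# Toy in-memory flag store; in practice this would be backed by a
# config service so flags can be flipped without a deploy.
FLAGS = {"new_checkout_flow": True}

def is_enabled(flag: str) -> bool:
    """Default to off: an unknown flag must never enable new code."""
    return FLAGS.get(flag, False)

def new_checkout(cart):
    # The risky new path being rolled out.
    return ("new", sum(cart))

def legacy_checkout(cart):
    # The well-tested fallback that stays available.
    return ("legacy", sum(cart))

def checkout(cart):
    # Flipping FLAGS["new_checkout_flow"] to False instantly routes
    # all traffic back to the legacy path -- no rollback deploy needed.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The default-to-off check is what makes the safe thing easy: forgetting to register a flag disables the new path rather than enabling it.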

[–] codeinabox@programming.dev 3 points 2 months ago

I think there's many solutions to this, including setting a minimum account age to accept pull requests from, or using Vouch.

[–] codeinabox@programming.dev 2 points 2 months ago (2 children)

Guys, can we add a rule that all posts that deal with using LLM bots to code must be marked? I am sick of this topic.

How would you like them to be marked? AFAIK Lemmy doesn't support post tags

[–] codeinabox@programming.dev 6 points 2 months ago (1 children)

What I'm saying is the post is broadly about programming, and how that has changed over the decades, so I posted it in the community I thought was most appropriate.

If you're arguing that articles posted in this community can't discuss AI and its impact on programming, then that's something you'll need to take up with the moderators.

[–] codeinabox@programming.dev 15 points 2 months ago (5 children)

In fact, this garbage blogspam should go on the AI coding community that was made specifically because the subscribers of the programming community didn't want it here.

This article may mention AI coding, but I made a very considered decision to post it here because its primary focus is the author's relationship to programming, and it is therefore worth sharing with the wider programming community.

Considering how many people have voted this up, I would take that as a sign I posted it in the appropriate community. If you don't feel this post is appropriate in this community, I'm happy to discuss that.

[–] codeinabox@programming.dev 1 points 2 months ago

My nuanced reply was in response to the nuances of the parent comment. I thought we shared articles to discuss their content, not the grammar.

[–] codeinabox@programming.dev 6 points 2 months ago (5 children)

Regardless of what the author says about AI, they are bang on with this point:

You have the truth (your code), and then you have a human-written description of that truth (your docs). Every time you update the code, someone has to remember to update the description. They won't. Not because they're lazy, but because they're shipping features, fixing bugs, responding to incidents. Documentation updates don't page anyone at 3am.

On a previous project, we had a manually maintained Swagger document that was meant to be the source of truth for the API and kept in sync with the code. In practice, no one kept it in sync except when I reminded them to.

Based on that and other past experiences, I think it's easier for the code to be the source of truth, and use that to generate your API documentation.
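
Real tools (FastAPI, springdoc, swagger-jsdoc and the like) do this by reading type annotations or doc comments straight out of the code. As a toy sketch of the idea, assuming nothing beyond the standard library, here is a generator that derives an OpenAPI-style path entry from a handler's own signature and docstring, so the docs cannot drift from the code:

```python
import inspect

def get_user(user_id: int) -> dict:
    """Fetch a user by numeric ID."""
    return {"id": user_id}

def openapi_stub(func, path: str, method: str = "get") -> dict:
    """Build a minimal OpenAPI-shaped dict from the function itself.

    The signature supplies the parameters and the docstring supplies
    the summary, so updating the code updates the docs.
    """
    sig = inspect.signature(func)
    params = [
        {
            "name": name,
            "in": "path",
            "schema": {"type": "integer" if p.annotation is int else "string"},
        }
        for name, p in sig.parameters.items()
    ]
    return {path: {method: {"summary": inspect.getdoc(func),
                            "parameters": params}}}

spec = openapi_stub(get_user, "/users/{user_id}")
```

Rename the parameter in `get_user` and the generated spec changes with it; there is no second document for anyone to forget.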

[–] codeinabox@programming.dev 6 points 2 months ago

There are plenty of humans using the em dash; how do you think large language models learnt to use it in the first place? NPR even did an episode on it called Inside the unofficial movement to save the em dash — from A.I.

[–] codeinabox@programming.dev 9 points 2 months ago (12 children)

There is much debate about whether use of the em dash is a reliable signal of AI-generated content.

It would be more effective to compare this post with the author's posts before gen AI, and see if there has been a change in writing style.
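
The before-and-after comparison can be made concrete. A toy sketch, with placeholder texts standing in for an author's real posts, measuring em-dash frequency per 1,000 words across the two samples:

```python
def em_dash_rate(text: str) -> float:
    """Em dashes per 1,000 words; 0.0 for empty text."""
    words = len(text.split())
    return 1000 * text.count("\u2014") / words if words else 0.0

# Placeholder samples; in a real comparison these would be the
# author's posts from before and after generative AI tools existed.
before = "I wrote this years ago and never used that punctuation mark."
after = "Now every clause \u2014 somehow \u2014 gets one \u2014 or two."
```

A large jump in the rate suggests a change in writing style; it is evidence worth discussing, not proof of AI authorship on its own.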

[–] codeinabox@programming.dev 115 points 2 months ago (44 children)

This quote on the abstraction tower really stood out for me:

I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.

They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.

But sure. AI is the moment they lost track of what’s happening.

The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack. AI is just the layer that made the pretence impossible to maintain.

[–] codeinabox@programming.dev 1 points 2 months ago (1 children)

Even if the bubble pops, the existing large language models will remain, as will AI-assisted coding.

[–] codeinabox@programming.dev 19 points 2 months ago

Instead, most organisations don’t tackle technical debt until it causes an operational meltdown. At that point, they end up allocating 30–40% of their budget to massive emergency transformation programmes—double the recommended preventive investment.

I can very much relate to this statement. Many of the contracts I've worked on in the last few years have been transformation programmes, where an existing product is rewritten and replatformed, often because of the level of tech debt in the legacy system.
