Recent, ongoing observations of the past 12 years...
MangoCats
Our department is too small to really need an architect, plus there are a lot of egos running around who want to be the architect. I'm fine with the title Software Engineer.
Life is not safe. It's relatively safe, as compared to living under a lean-to in the woods with bears and disease-carrying insects and such, and even that's safer than people make it out to be. However, there are "dangerous things" out there, both in the woods and in the streets and in organized crime / government. Are you more likely to die of a car crash, a malicious action of some organized group, or a car crash arranged by some organized group? It's really hard to be sure, but if the available information is anything close to accurate, you're better off worrying about a car crash than a meteorite strike, or a shark bite, or bad people targeting you.
If Germany had maintained (not expanded) their nuclear power generation capabilities, they could be burning zero coal right now.
I guess it's not puzzle solving for me anymore. The puzzle is solved in my head; the four hours of exploration are mostly occupied with looking up annoying details about how to do what's in my head within the syntax and limitations of the latest re-invention of whatever language has been selected / is available in the environment. Both routes have to be tested, but glancing over provided source code and nodding "yeah, that looks right," then testing the successful implementation, is a hell of a lot more satisfying for me than deciphering compiler errors, untangling version incompatibilities, looking up function signatures, etc.
Oh, about the unnecessary loops thing. I was once hired by a PhD to build a software development department to convert his "academic MATLAB code" into an actual commercial application. One of my programmers was another PhD, and he successfully translated the MATLAB into C++ and it did run faster, and he parallelized it successfully (the latest hotness in 2005), but it was still the slowest part of our whole operation. So we did an old-fashioned code review, found an unnecessary loop, unwound it, and got the exact same answers 100x faster. Apparently the PhDs were loop-slop blind: one had missed it for years, and the other had been working on the translation/optimization/parallelization for months without seeing it.
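For flavor, the general pattern is common in translated numeric code: an inner computation that could be done once gets redone on every iteration. A minimal sketch (hypothetical illustration in Python, not the actual code from that project):

```python
# Hypothetical illustration of the "unnecessary loop" pattern:
# the naive version recomputes sum(values) once per element,
# turning an O(n) pass into O(n^2); hoisting the total out of
# the loop gives the exact same answers, far faster.

def normalize_naive(values):
    # sum(values) is re-evaluated for every element -- O(n^2)
    return [v / sum(values) for v in values]

def normalize_fixed(values):
    # compute the total once -- O(n), identical results
    total = sum(values)
    return [v / total for v in values]

data = [1.0, 2.0, 3.0, 4.0]
assert normalize_naive(data) == normalize_fixed(data)
```

On a list of a few million elements the difference between these two is the difference between milliseconds and hours, which is roughly the kind of win we saw.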
because it’s all just temporary, so you throw every new request together in the fastest way you can
I have been finding many use cases for "throwaway" AI-coded tools like parser-based calculators. The old rule "if it takes you a day to write it and it speeds up a week-long task by even 20%, then it's a win" has had the cost of "writing it" come down by a big factor for simple things. A parser tool thingy that might have taken 4 hours can now be certified good enough for internal use in 40 minutes, and if it's saving 3 hours of hand work, that turns a "meh, maybe we'll do this again sometime - I'd rather write the tool than drudge through 3 hours of brain-dead, error-prone work" into an overall big win.
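To be concrete about the scale of tool I mean, a "parser-based calculator" is nothing fancy. A minimal sketch (my own grammar and names, not any specific tool; no unary minus or error handling, which is exactly the kind of corner a throwaway internal tool can skip):

```python
# Minimal recursive-descent calculator -- the sort of "throwaway"
# internal tool that used to cost a few hours to write by hand.
# Grammar:  expr   := term (('+'|'-') term)*
#           term   := factor (('*'|'/') factor)*
#           factor := NUMBER | '(' expr ')'
import re

def tokenize(s):
    # numbers, parentheses, and the four operators
    return re.findall(r'\d+\.?\d*|[()+\-*/]', s)

def calc(s):
    tokens = tokenize(s)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        if peek() == '(':
            eat()          # consume '('
            val = expr()
            eat()          # consume ')'
            return val
        return float(eat())

    def term():
        val = factor()
        while peek() in ('*', '/'):
            if eat() == '*':
                val *= factor()
            else:
                val /= factor()
        return val

    def expr():
        val = term()
        while peek() in ('+', '-'):
            if eat() == '+':
                val += term()
            else:
                val -= term()
        return val

    return expr()
```

Usage is just `calc("(2+3)*4")`. Forty lines, and once it passes a handful of spot checks against known answers it's "certified good enough for internal use."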
I find a lot of the AI code hate coming from people who apparently have never managed software production by less-than-perfect programmers. If you handle AI code like it's coming from a team of consultants on a revolving-door staffing arrangement, it works out about like that. Ideal? No. If you hire 9 women, can you get them to have your baby within one month? No. But if you are handed 10 programmers who know almost nothing about your specific project, you generally can get them to produce usable code at about the rate of 3 programmers who are familiar with the project - iff you're disciplined about everything they do.
is germany increasing or decreasing the amount of coal it burns every year?
Increasing - mostly due to the shutdown of their nuclear power plants.
For the tiny % of people who actually put it to good use, there’s 100x more abusing or mishandling it.
It's going to take a while, but hopefully that percentage improves over time. PCs in the 1990s were "Solitaire Stations" for an awful lot of people who didn't know how to make them do anything else.
The bottom line for me is: it finds issues. More issues than typical human code reviews find. Like human code reviews, some of the issues it finds are trivial or unimportant - debatable whether "fixing" them actually improves the product overall. Also like human code reviews, sometimes it finds things that look like issues but really aren't when you dig into the total picture. Then, some of the issues it finds are real, and some are subtle, like actual memory leaks, unsanitized inputs, etc. - and if you're going to ignore those, you're just making worse software than is possible with the current tools.
Also, unlike most human code reviews, when it finds an issue it can and will do a thorough writeup explaining why it believes it is an issue, with code snippets in the writeup, links into the source, proposed fixes, etc. All that detail is way too much effort to be a productive use of a human reviewer's time, but it genuinely helps in the evaluation of the issue and the proposed fix(es).
Just like human code reviews, if you just accept and implement everything it says without thinking, you're an idiot.
I'll swag that the simulation process is complex, fraught with various pitfalls and idiosyncrasies that require specialized training / experience to get it set up and running properly, even in 'rough' mode. I'll further swag that the reason it used to take 2 weeks was because there were about 50 engineering hours required to get it ready to do the number crunching - and those engineering hours can now be handled by the "creative writing machine," which has been trained in the various things it needs to know to match the expected patterns.