Bug Report
Describe the issue:
The font size of the comments appears to be random (see attached screenshot).
The global and comment font size settings don't seem to have any effect.
It doesn't seem to correlate with comment length, depth, age, score, or any other variable I can think of.
I just updated the app and it's still happening.
Steps to reproduce:
- Open a post's comments.
- Scroll. Unfold folded comments.
- Notice the rich variety of font sizes.
- Struggle to read the smaller ones because you're not as young as you used to be.
Device Information
- App Version: 1.0.344 (344)
- Platform: android
- OS Version: SKQ1.220303.001 test-keys
- Notice: Using legacy Shared Preferences
Modified Settings
The following settings have been changed from defaults:
- alwaysShowInstance: true (default: false)
- alwaysTrustDomains: true (default: false)
- shouldUseHighRefresh: false (default: true)
- shouldShowPageNumbers: true (default: false)
- mediaSecondaryAction: download (default: none)
- isVolumeNavigationEnabled: true (default: false)
- shouldAlwaysDisplayAvatars: true (default: false)
- shouldHighlightNewComments: true (default: false)
- nsfwView: show (default: blur)
- applyNsfwInCommunities: false (default: true)
- cardType: fullwidth (default: card)
- postActions: [comment, save] (default: [comment, save, none, crosspost])
- shouldCombineMarkdownImages: false (default: true)
- shouldRemembercommentPosition: false (default: true)
- shouldDimBrightBackgrounds: false (default: true)
- shouldForceBlackImageBackground: false (default: true)
- textSizeMultiplier: 1.25 (default: 1.0)
- maxCacheSizeGB: 1 (default: 2)
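For reference, the only setting above that should matter here is textSizeMultiplier. Below is a minimal Kotlin sketch of the behaviour I'd expect: one global multiplier applied to a fixed base size, identical for every comment. All names and numbers in it are hypothetical, not taken from the app's actual code.

```kotlin
// Hypothetical sketch of the expected behaviour: every comment's text size is
// the base size times one global multiplier, regardless of depth, length, age,
// or score. None of these names come from the app's real source.
data class Comment(val body: String, val depth: Int)

const val BASE_COMMENT_SIZE_SP = 14f

fun commentTextSizeSp(textSizeMultiplier: Float): Float =
    // Expected: a single deterministic value (14 * 1.25 = 17.5sp here),
    // the same for every comment on screen.
    BASE_COMMENT_SIZE_SP * textSizeMultiplier

fun main() {
    val comments = listOf(
        Comment("short", depth = 0),
        Comment("a much longer comment body that wraps over several lines", depth = 3),
    )
    comments.forEach { c ->
        // What I actually see is a different size per comment, and changing
        // the multiplier (1.25 here) has no visible effect at all.
        println("depth=${c.depth} size=${commentTextSizeSp(1.25f)}sp")
    }
}
```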
Exactly. When it comes to code, for instance, what percentage of the training data is Knuth, Carmack, and similarly skilled programmers, and what percentage is spaghetti code perpetrated by underpaid and uninterested interns?
Shitty code in the wild massively outweighs properly written code, so by definition an LLM autocomplete engine, which at best can only produce an average of its training data, will only produce shitty code. (Of course, average or below-average programmers won't be able — or willing — to recognise it as shitty code, so they'll feel like it's saving them time. And above-average programmers won't have a job anymore, so they won't be able to do anything about it.)
And as more and more code is produced by LLMs, the percentage of shitty code in the training data will only get higher, and the shittiness will only get worse, until newly trained LLMs can only produce code too shitty to even compile, and there will be no programmers left to fix it, and civilisation will collapse.
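To make that feedback loop concrete, here's a toy Kotlin simulation. It's purely illustrative: the quality scores, the share of LLM output per generation, and the assumption that a model loses a little quality every time it imitates its training data are all made up, not a claim about real training dynamics.

```kotlin
// Toy model of the feedback loop described above. "Quality" is an arbitrary
// 0..1 score; each model is assumed to output the average quality of its
// training corpus minus a small copying loss. All numbers are invented.
fun main() {
    val humanQuality = 0.4       // assumed average quality of code "in the wild"
    val copyingLoss = 0.05       // assumed quality lost when a model imitates its data
    var modelQuality = humanQuality * (1 - copyingLoss)  // generation 1: human code only
    var llmShare = 0.0           // fraction of the corpus that is LLM output

    for (generation in 2..10) {
        llmShare = minOf(0.9, llmShare + 0.15)  // LLM output keeps flooding the corpus
        // The next model trains on a blend of human code and the previous model's output.
        val corpusQuality = (1 - llmShare) * humanQuality + llmShare * modelQuality
        modelQuality = corpusQuality * (1 - copyingLoss)
        println("gen %d: llmShare=%.2f quality=%.3f".format(generation, llmShare, modelQuality))
    }
}
```

Under those assumptions the printed quality score declines every generation, which is the spiral the paragraph above is describing.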
But, hey, at least the line went up for a while and Altman and Huang and their ilk will have made obscene amounts of money they didn't need, so it'll have been worth it, I suppose.