Yes, recently I complained about a local goon and accidentally accused them of being Gen Alpha, but they were late Gen Z. To prevent incorrect generational discrimination I propose an extension of the generation labels in the format {generation label}-{birth datetime in ISO 8601}-{country code}:{postal code}-{firstname}:{lastname}-{random id}. Now I can correctly complain that the "GenZ-2007-05-01T09:00:00Z-de:33604-Lazlo:Ailton-16b849e3-3368-4df0-b4c1-56e9cf46c5fb" are lazy slowpokes who will be the doom of the world.
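For the pedantic among us, the format above can be sketched in a few lines. This is purely an illustrative toy, assuming the parts are joined exactly as written in the template; the function name and field order are my own invention, not any real spec:

```python
import uuid

def generation_id(label, birth_iso8601, country, postal_code, first, last):
    """Build one of the proposed extended generation identifiers.

    Joins the parts per the template:
    {label}-{birth datetime}-{country}:{postal code}-{first}:{last}-{random id}
    """
    return f"{label}-{birth_iso8601}-{country}:{postal_code}-{first}:{last}-{uuid.uuid4()}"

# Example (the trailing UUID is random on every call):
print(generation_id("GenZ", "2007-05-01T09:00:00Z", "de", "33604", "Lazlo", "Ailton"))
```

Now your generational complaints can be targeted with surgical precision.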
The V2
Unfortunately I fail to do it regularly, but I love it. On the one hand it helps me wrap up the day, clear my head and fall asleep. On the other hand I love reflecting on snapshots of my past thoughts and experiences.
It also depends on the type of journal. For example I also maintain a dream journal, which improves dream recall. Furthermore, discovering patterns, confusions and, in general, yourself in your dreams is fascinating.
Letters don't have colors, but people may associate a specific color with a letter. It could be influenced by logos, symbols and just about anything that affected us personally in our life. It's not a logical binding.
E.g. I can imagine that a lot of people will associate the letter "x" with the color red, because it is often displayed in that color, especially when it symbolizes deletion. Perhaps someone was a big fan of the pro wrestling stable D-Generation X and therefore sees the letter in green. Another person thinks of a black X, because they are addicted to Twitter.
However I think most of us don't know exactly why we associate a color with a letter; it's the result of a looooot of links and associations.
If you were a writer and I helped you with research, e.g. I suggested an adjective at your request that you even dismissed after some pondering, would it be correct to say that you used me to write for you? Is it the same as if I were your ghostwriter?
My point is not that ChatGPT for research is awesome, but that the article's headline and OP's conclusion are very misleading. While I can relate to that enthusiasm, I don't believe in mincing a source down to fit my narrative. It's even counter-productive to spread awareness about ChatGPT's incorrectness using an incorrect takeaway from someone's statements...
I mean, there is a huge difference between using ChatGPT for research and using it to actually write for you. Unless you speculate that he didn't reveal an example of the latter out of fear, this isn't as dramatic as the title makes it sound.
Not that I have the discipline to keep writing constantly at all. But even if I could, I restrain myself from working on one very specific spot for too long. When you hit that boomerang editing period, I think it's time to take a break from that section/piece and become (somewhat of) a stranger to your work again.
I don't follow the advice directly and instead my goal is to not let my thoughts rot in my brain. Furthermore I try to have a healthy amount of parallel projects and naturally when it's time to turn away from project A, I let my rising motivation for another project take over. Not that I have the discipline to maintain a healthy balance between my parallel projects...
It doesn't hurt to make a sub-category community, but please also post to c/Music. I certainly hope that on Lemmy we can stop defaulting to the majority language and culture.
Well, people are people. It might be unpleasant and rude to systematically browse people like sources, especially if you need to interrupt them and clarify your prompts, basically treating them like chatbots.
The way I see it, you should differentiate how you approach human and non-human sources. Information from people has a lot of advantages, too. You might get your target information quickly without any "bloat" from e.g. an encyclopedia article. However you might lose a lot of key information. A person forces you into an interactive, bi-directional conversation. They will get information from you and are more likely to add additional information you need.
For example you might get the commonly accepted translation of an ancient poem from an article. A human can give you that, too, but if they notice that you are absolutely not familiar with the subject, they might also clarify that a literal translation is not possible, that you need the context of some historical event and so on. With a non-human source you might have skipped that information. This is how some people can use a trusted source but still leave with fake news, because the extracted information is incomplete.
In reality you can't reduce AI to replacing only "Level 1 coding" and doing only "typing". It will make assumptions about those "Level 2 and 3" decisions in its generated code. To reduce or control this you have to invest more in documentation/instructions and code review. You basically shift the focus based on the assumption that "Level 1 coding", with all of that "hand-crafted" code, was such a big waste of time and money. But that's a made-up problem.
On top of that, a lot of the vibe-coded projects that appear here and there don't seem to even intend to let the AI do only the typing. They don't just let the AI translate "flow" and "architecture" into code; they make the AI translate their demands into code.
The confusion at the beginning was an intentional trap by the author. The author's real confusion only comes later.
Not really the meat of your question, but in my opinion the term already lost its punch at its creation, given its etymology. I wish we had hit the reset button and clarified that "Semites" and "antisemite" refer to only Jews solely in the historical context of Nazi Germany. Sometimes I wonder if I should simply use "antisemitism" for hatred against Palestinians, too, and that without any context or explanation.