IMO, fork is the best git client for macOS/Windows
At first glance it looks like a SourceTree clone. What does fork provide that SourceTree doesn't?
Does anyone have any good sources or suggestions on how I could look to try and begin to improve documentation within my team?
Documentation in software projects, more often than not, is a huge waste of time and resources.
If you expect your docs to go too much into detail, they will quickly become obsolete and dissociated from the actual project. You will need to waste a lot of work keeping them in sync with the project, with little to no benefit at all.
If you expect your docs to stick with high-level descriptions and overviews, they quickly lose relevance and become useless after you onboard to a project.
If you expect your docs to document use cases, you're doing it wrong. That's the job of automated test suites.
The hard truth is that the only people who think they benefit from documentation are junior devs just starting out in their careers. Their need for docs is a proxy for the challenges they face reading the source code and understanding how the technology is used and how things work and are expected to work. Once they get through onboarding, documentation quickly vanishes from their concerns.
Nowadays software is self-documenting with a combination of three tools: the software projects themselves, version control systems, and ticketing systems. A PR shows you what code changes were involved in implementing a feature or fixing a bug, the commit logs touching some component tell you how that component can and does change, and ticketing shows you the motivation and the context for those changes. Automated test suites track the conditions the software must meet and which the development team feels must be ensured in order for the software to work. The higher you are in the testing pyramid, the closer you are to documenting use cases.
If you care about improving your team's ability to document their work, you focus on ticketing, commit etiquette, automated tests, and writing clean code.
If you had a grasp on the subject you'd understand that it takes more than mindlessly chanting "tools" to actually get tangible improvements, and even in that scenario they often come with critical tradeoffs.
It takes more than peer pressure to make a case for a tool.
That seems like a poor attitude imo.
Why do you believe that forcing something onto everyone around you is justifiable? I mean, if what you're pushing is half as good as what you're claiming it to be, wouldn't you be seeing people lining up to jump on the bandwagon?
It's strange how people push tools not based on technical merits and technological traits, but on fads and peer pressure.
Clearly Rust is a conspiracy.
Anyone in software development who was not born yesterday is already well aware of the whole FOMO cycle:
The whole idea to check the donations came from stumbling upon this post which discussed costs per user.
Things should be put into perspective. The cost per user is actually the fixed monthly cost of operating an instance divided by the average number of active users.
In the discussion you linked to, there's a post on how Lemmy.ml costs $80/month plus a domain name to serve ~2.4k users. If we went by the opex/users metric, needlessly expensive setups with low participation would become a justification to ask for more donations.
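The back-of-the-envelope math above is trivial, but a quick sketch makes the metric's weakness concrete (the $80/month and ~2,400-user figures come from the linked post; the second, deliberately wasteful setup is a hypothetical for illustration):

```rust
fn cost_per_user(monthly_cost_usd: f64, active_users: f64) -> f64 {
    monthly_cost_usd / active_users
}

fn main() {
    // Lemmy.ml figures from the linked post: $80/month, ~2,400 active users.
    let lemmy_ml = cost_per_user(80.0, 2400.0);
    println!("Lemmy.ml: ${:.4} per user per month", lemmy_ml);

    // Hypothetical over-provisioned instance: $200/month for only 100 users.
    // By the opex/users metric it "needs" 60x more donations per user.
    let wasteful = cost_per_user(200.0, 100.0);
    println!("Wasteful setup: ${:.4} per user per month", wasteful);
}
```

Which is the point: dividing opex by active users rewards inefficiency rather than measuring it.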
Regardless, this is a good reminder that anyone can self-host their own Lemmy instance. Some Lemmy self-host posts go as far as to claim a Lemmy instance can be run on a $5/month virtual private server from the likes of scaleway.
Is there something else I’m not seeing?
Possibly payment processing fees. Some banks/payment institutions charge you for a payment.
This is a flippant statement, honestly, as it disregards the premise of the discussion. It’s memory safety.
You're completely ignoring even the argument you're supposedly commenting on, let alone the point it makes. You're parroting cliches while purposely turning a blind eye to the point made in the blog post: that yes, C can be made memory safe. Likewise, Rust also has CVEs due to memory safety bugs. So, beyond these cliches, what exactly are you trying to argue?
In fact a reasonable compromise is possible: Rust is perfectly capable of interoperating with the C language.
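For what it's worth, that interop is built into the language; here's a minimal sketch calling a C standard library function (`abs` from libc, which Rust programs link against by default on most platforms) from Rust:

```rust
// Declare a foreign function from the C standard library.
// No extra crates or build steps needed for libc symbols.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // Calls into C are `unsafe` because the Rust compiler
    // cannot verify guarantees about the foreign code.
    let result = unsafe { abs(-7) };
    println!("abs(-7) = {}", result);
}
```

The same mechanism works in both directions (C can call `extern "C"` Rust functions), which is why incremental porting is possible without doubling anything overnight.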
I doubt you work on software for a living, because not only are you arguing for a solution in desperate need of a problem, but no one in their right mind would think it a good idea to double the tech stacks, development environments, pipelines, and everything else, and with that greatly increase the cognitive load required to develop even the simplest features, just to... for what, exactly? What exactly is your value proposition, and what tradeoffs, if any, did you take into account?
You are free to do whatever you feel like doing in your pet projects. Rewrite them in as many languages as you feel like using. In professional settings, where managers have to hire people, have time and cash budgets, and have to show bugs and features being finished, this sort of nonsense doesn't fly.
The core principle of computer science is to keep moving forward with tech, and to leave behind the stuff that doesn't work.
I'm not sure you realize you're proving OP's point.
Rewriting projects from scratch by definition represents a big step backwards, because you're wasting resources to deliver barely working projects with a fraction of the features the legacy production code already delivered after reaching a stable state.
And you are yet to present a single upside, if there is even one.
At this point it’s literally easier to slowly port to a better language than it is to try and ‘fix’ C/C++.
You are just showing the world you failed to read the article.
Also, it's telling that you opt to frame the problem as "a project is written in C" instead of actually securing and hardening existing projects.
The problem isn’t one of computer science principles, it’s simply one of safety.
I think you missed the point entirely.
You can focus all you want on artificial ivory-tower scenarios, such as a hypothetical ability to rewrite everything from scratch with the latest and greatest tech stacks. Back in the real world, that is a practical impossibility in virtually all scenarios, and a renowned project killer.
In addition, the point stressed in the article is that you can add memory safety features even to C programs.
Also, who said this is a principle of computer science?
Anyone who devotes any resources to learning software engineering.
Here's a somewhat popular essay on the subject:
https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
$1500/year sounds like an awful lot for a site that barely receives any updates, even from the public.
What efforts are being made to lower operational costs?