wolf

joined 2 years ago
[–] wolf@lemmy.zip 4 points 6 months ago

At the moment, I am trying to clear ascension 20.

My chance of winning the game up to ascension 5 is > 50%.

IMHO StS teaches the player bad habits in the lower difficulties (and the difficulty spike when reaching the heart is not that great).

Some tips stolen from better players than me:

  • You really have to play optimally and think about every card to minimize life loss. (This could also mean taking a hit early to kill off an enemy faster vs. blocking an early hit and taking massive damage later.)
  • The bosses especially can be seen as a problem to solve. Look ahead: do you have the right card(s) in the deck to solve the problem?
  • The first hall is mostly about up-front damage; in the second you will need some area-of-effect damage, etc.
  • Before adding a card to your deck, answer the following questions:
    • How high is the chance I can even play this card? (Example: You have 3 energy and a card costs 2 energy. If you pick up another 2-energy card, you now increase your chance of a dead draw, because you cannot play both if they appear in the same hand.)
    • Does this card have any synergy with the cards I already have in my deck or with any artifacts?
    • Does this card solve a problem (e.g. a boss or enemy) I have? IMHO all the generic advice is not wrong (like having as few cards as possible), but the point of playing the higher ascensions is really seeing 'the whole' instead of focusing on one aspect. For example, if you have Corruption in a bottle and Dark Embrace, all of a sudden you want to have as many skills as possible... OTOH, if you have two Dropkicks, you want everything which removes cards, to get an endless Dropkick engine going as soon as possible.
  • Take care of immediate problems you know you will face (like the boss of the current hall), instead of speculating on card combinations which might or might not show up in the future. (Exception to the speculation rule: the Ironclad has Limit Break, and it is totally reasonable to expect to find some strength boost as an Ironclad, so Limit Break is usually the one card I pick up without a second thought.)

tl;dr: Picking your strategy and adding/removing cards must be seen in the context of artifacts, energy, and the bosses you will face. Optimize for your next known problem instead of betting on cards becoming available. You can have 1-2 cards for special situations or as speculation, but adding, for example, another attack card when you already have enough attacks simply doesn't solve a problem or make your deck stronger.

[–] wolf@lemmy.zip 2 points 7 months ago

Yes, I didn't think too much about food/calories in the past. When I read about the connection, it was obvious in hindsight, but I wouldn't have gotten the idea by myself.

[–] wolf@lemmy.zip 3 points 7 months ago

Thanks a lot! Great write-up, and the energy-stored view of calories makes a lot of sense and is very intuitive!

[–] wolf@lemmy.zip 2 points 7 months ago

Thank you for your answer!

[–] wolf@lemmy.zip 2 points 7 months ago

Thanks a lot! To the point and on an abstraction level that is very clear!

 

According to a book I am reading, diet science currently agrees that there is one way to lose weight: a calorie deficit.

For example, if I need 2000 kcal a day and eat only 1500 kcal a day, I will lose some weight over the next weeks/months.
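(To put a rough number on that, assuming the commonly cited rule of thumb of ~7700 kcal per kg of body fat: a deficit of 500 kcal/day is 3500 kcal/week, so roughly 0.45 kg of fat lost per week.)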

To my understanding, calories here are totally interchangeable if we are only concerned with losing weight (ignoring nutrients etc.).

Calories are basically measured by burning food and measuring how much energy was set free.

My question is: why and how does this work so well, and why are calories interchangeable?

In more detail: why can we translate the burning of calories with fire to the processing of calories in food by our digestive system so perfectly? Why is there no difference (concerning weight loss) if I eat 1500 calories as pure sugar or as pure protein (where I would assume the body needs more energy to break down the protein)?

[–] wolf@lemmy.zip 2 points 8 months ago

Same for me. I have put an unholy amount of time into StS on all platforms and still don't have all the achievements! :-) OTOH, there are worse ways to waste your lifetime. ;-)

[–] wolf@lemmy.zip 3 points 8 months ago (2 children)

Ninja Gaiden Ragebound: Not finished yet, but having a total blast playing it. Great, responsive controls, the level design is great, and enemies telegraph their attacks properly, as it should be in an action game.

Street Fighter 6: Gave Sagat a try

Slay the Spire: Ascension level 18; want to make it to 20 before the second part goes into early access.

 

Question is in the title: What is the supposed workflow for vanilla Gnome for keyboard users?

Are there any videos/design documents which explain how the workflow is supposed to work?

Assume I have a full-screen web browser on workspace 1 and now want a terminal. I hit the Super key, type "terminal", hit enter... and then I have a terminal which does not start maximized on workspace 1. So I can either maximize the terminal and switch between the two applications, or arrange them side by side... or I can navigate to workspace 2 and start the terminal there (it will not start maximized on the empty workspace 2 either), then switch between the two workspaces (AFAIK no hotkeys are bound by default to navigate directly to a specific workspace).

What I simply do not understand: does the vanilla Gnome workflow expect you to use mouse and keyboard together? Like: hit Super, use the mouse to go to the next workspace, type "terminal", click to maximize the terminal (or use Super+Up)?

It just seems like a lot of work/clicks/keys to achieve something simple. And to my understanding Gnome expects you to use basically every application in a full-screen window anyway, so why does it not open a new application full screen on the next free workspace by default?
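For the keyboard part, at least, direct workspace hotkeys can be added with two standard gsettings keys. A minimal sketch (Super+1..9 is taken by the dash favourites by default, so those bindings are cleared first; the range 1..4 is just an example):

```
# bind Super+1..4 to jump directly to workspaces 1..4
for i in 1 2 3 4; do
  # free Super+$i from "launch favourite app $i"...
  gsettings set org.gnome.shell.keybindings "switch-to-application-$i" "[]"
  # ...and use it to switch to workspace $i instead
  gsettings set org.gnome.desktop.wm.keybindings "switch-to-workspace-$i" "['<Super>$i']"
done
```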

[–] wolf@lemmy.zip 2 points 9 months ago

Not sure if it is applicable, but wouldn't it be an option to boot the Fedora Workstation live image, enable your swap partition in the live system, and send it to sleep via systemd?

This should give you feedback with a fairly recent kernel, and Gnome has (at least for me) been the desktop option with the fewest bugs.
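Roughly, from the live session (the device name below is just a placeholder):

```
sudo swapon /dev/nvme0n1p3   # activate the installed system's swap partition
sudo systemctl suspend       # suspend to RAM (hibernate would need resume= setup)
```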

[–] wolf@lemmy.zip 9 points 9 months ago* (last edited 9 months ago) (2 children)

Before asking for another distro, you should figure out what the root cause of the trouble you observe is. Sleep/wake-up under Linux is usually highly hardware dependent. Even the Steam Deck, which has paid first-level hardware support from Valve, sometimes has trouble waking up properly after sleep, at least in desktop mode. Good luck!

[–] wolf@lemmy.zip 2 points 9 months ago (1 children)

Thanks, but could you clarify which extension you mean for Gnome? Native Window Placement is AFAIK just for the overview.

[–] wolf@lemmy.zip 2 points 9 months ago (1 children)

Which extensions do I need?

 

When using tmux, it is easy to create a script which starts tmux, configures its screens/panes, and opens/runs programs.

I like this a lot.
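For reference, a minimal sketch of such a bootstrap script (session, window and program names are just examples):

```
#!/bin/sh
# create a detached session with a named first window running an editor
tmux new-session -d -s work -n editor 'vim'
# split that window and run a monitor next to the editor
tmux split-window -h -t work:editor 'htop'
# add a second window for logs
tmux new-window -t work -n logs 'journalctl -f'
# finally attach to the prepared session
tmux attach -t work
```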

My baseline would be something like: when I log in, some applications are executed and their windows are automatically placed on virtual desktops.

For example:

  • Open Firefox and put it on virtual desktop 1
  • Open Terminal in fullscreen and put it on virtual desktop 2
  • Open VSCode and put it on virtual desktop 3

Something like that is possible with sway, but in the environment I am working in, sway is not able to run XWayland applications without crashing.

Is there any way to get this functionality on Gnome, Mate or Xfce?

Even better would be something to open several windows and arrange them automatically for different work tasks/projects I am working on. Any ideas?

Edit: Solved! Thanks for the input. The Auto Move Windows extension for Gnome solves my problem.
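For anyone finding this later: besides the Extensions app, the setting can be scripted via gsettings. A sketch (the desktop-file IDs are examples, check /usr/share/applications for yours; depending on how the extension was installed, gsettings may need to be pointed at its schema directory):

```
gsettings set org.gnome.shell.extensions.auto-move-windows application-list \
  "['firefox.desktop:1', 'org.gnome.Terminal.desktop:2', 'code.desktop:3']"
```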

[–] wolf@lemmy.zip 32 points 9 months ago

Ah, sorry to read that - I like the idea of Bcachefs and would have been happy to see it become production-ready eventually.

OTOH, it seems that in recent years I have read more about the drama around Bcachefs commits to the kernel than about any technical aspects of Bcachefs.

 

Shout out to the great Hungarian people! :-)

 

Hello, fellow Linux users!

My question is in the title: What is a good approach to deploy docker images on a Raspberry Pi and run them?

To give you more context: the Raspberry Pi already runs an Apache server for letsencrypt and as a reverse proxy, and my home-grown server should be deployed as a docker image.

To my understanding, one way to achieve this would be to push all sources over to the Raspberry Pi, build the docker image on the Raspberry Pi, give the image a 'latest' tag, and use systemd with Docker or Podman to execute the image.

My questions:

  • Has anyone here had a similar problem but used a different approach to achieve this?
  • Has anyone here automated this whole pipeline, so that in a perfect world I just push updated sources to the Raspberry Pi, the new docker image gets built, and Docker/Podman automatically picks up the new image? (See the sketch below for what I mean.)
  • I would also be happy to be pointed at any available resources (websites/books) which explain how to do this.

At the moment I am using Raspbian 12 on a Raspberry Pi Zero 2 W, and the whole setup works with home-grown servers which are simply deployed as binaries and executed via systemd. My Docker knowledge is mostly from a developer perspective, so I know nearly nothing about deploying Docker on a production machine. (Which means, if there is a super obvious way to do this, I might not even be aware it exists.)
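To sketch what I mean by "push and rebuild": a bare git repository on the Pi with a post-receive hook (all paths, names and the systemd unit are hypothetical):

```
#!/bin/sh
# hooks/post-receive in a bare repo on the Pi, e.g. ~/repos/myserver.git
set -e
WORKTREE=/srv/myserver
git --work-tree="$WORKTREE" checkout -f main
cd "$WORKTREE"
podman build -t myserver:latest .           # rebuild the image on the Pi itself
systemctl --user restart myserver.service   # unit created via `podman generate systemd`
```

Alternatively, I guess the image could be cross-built on a workstation with `docker buildx build --platform linux/arm64` and pushed to a registry, which would spare the Zero 2 W's limited RAM - but as said, I have no production experience with either approach.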

1
submitted 11 months ago* (last edited 11 months ago) by wolf@lemmy.zip to c/linux@lemmy.ml
 

For one user account, I want to have some bash scripts, which of course would be under version control.

The obvious solution is just to put the scripts in a git repository and make ~/bin a symlink to the scripts directory.

Now, it seems that on systemd systems, ~/.local/bin is supposed to be the directory for user scripts.

My question is mostly: what are the tradeoffs between using ~/bin and ~/.local/bin as the directory for my own bash scripts?

One simple scenario I can come up with is 3rd-party programs which might modify ~/.local/bin and put their own scripts/starters there, similar to 3rd-party applications which put their *.desktop files in ~/.local/share/applications.

Any advice on this? Is ~/.local/bin safe to use for my scripts, or should I stick to the classic ~/bin? Does anyone have a better convention?

(Btw.: I am running Debian everywhere, so I do not worry about portability to non systemd Linux systems.)

Solved: Thanks a lot for all the feedback and for answering my questions! I'll settle on having my bash scripts somewhere under ~/my_git_monorepo and linking them into ~/.local/bin to stick to the XDG standard.
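The linking itself is a small script (the repo path is my own layout):

```
mkdir -p ~/.local/bin
for f in ~/my_git_monorepo/scripts/*; do
  ln -sf "$f" ~/.local/bin/"$(basename "$f")"
done
# Debian's default ~/.profile already puts ~/.local/bin on PATH when the directory exists
```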

 

Just to be clear: My main point of sharing this article is about how detached the life of rich people is.

I do not at all agree that most of them are "Not evil, just disconnected"; I am pretty sure that some of them (and I know some of the privileged) are actively evil, know exactly what they are doing, and just don't give a shit.

0
submitted 2 years ago* (last edited 2 years ago) by wolf@lemmy.zip to c/linux@lemmy.ml
 

... I mean, WTF. Mozilla, you had one job ...

Edit:

Just to add a few remarks from the discussions below:

  1. As long as Firefox is sponsored by 'we are not a monopoly' Google, they can provide good things for users. Once advertisement becomes a real revenue stream for Mozilla, the Enshittification will start.
  2. For me it crosses the line when your browser is spying on you, and if 'we' accept it, Mozilla will walk further down this path.
  3. This will only be an additional data point for the companies spying on you; it will replace none of the existing methodologies. Learn about fingerprinting, for example.
  4. Mozilla needs to make money/find a business model, agreed. Selling you out to advertisement companies cannot be it.
  5. This is a very transparent attempt by Mozilla to become the man in the middle selling ads, despite the story they tell. At that point I can just use Chrome, Edge or Safari; at least Google has the expertise and the money to protect my data, and sadly Chrome is the most compatible browser (no fault of Mozilla/Firefox, of course).
  6. Mozilla massively acts against the interests of its small remaining user base. It is another dumb move by a leadership team that earns millions while kicking out developers, and it makes me wonder what will be next.
 

Interesting times ahead! I am really looking forward to the Leap Micro release and hope it advances the state of the art. :-)

 

For years now, I have not bought or assembled a new computer, because I am totally overwhelmed by the options available to me.

If we agree that there is a 'Paradox of Choice', it seems to make sense to have a much more limited choice of CPU models from a consumer point of view. For example: for each year, have an entry, a business and a pro model, add an extreme model for gamers, and offer each of these models in a version with a beefy integrated GPU.

But it also seems like a good idea for the manufacturers: they have to design, test and build each of their models, create advertisements, etc.; configuring their assembly lines alone costs money. Further, compilers have to generate code for a specific architecture, which means that all the software I didn't compile myself ends up using the instruction set of the lowest common CPU, never fully utilizing whatever I bought.
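To illustrate the instruction-set point with GCC:

```
gcc -O2 -march=x86-64 -o prog prog.c   # baseline ISA: runs on any 64-bit x86 CPU
gcc -O2 -march=native -o prog prog.c   # uses whatever the build machine supports (e.g. AVX2)
```

Distributed binaries are effectively built the first way, so the extra instructions of a newer CPU mostly sit unused.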

Apple (not a fan ;-)) shows IMHO how it is done with Apple Silicon: even I understand which CPU choice would be the right one for me. The Steam Deck is IMHO another success story: with it as reference hardware, I easily know whether I can play a game, and it is easy to know whether my hardware is faster than a Steam Deck. Compare that to games with hardware requirements like 'AMD TI 5800 8GB RAM' (a made-up model), which make my life miserable.

What I am looking for is fact based knowledge:

  • Why does it make (commercial) sense for AMD/Intel to create so many models?
  • What are their incentives?
  • What would happen if they reduced the number of different CPUs they offer? (Is there historical knowledge about this?)
 

There is a similar question on the site which must not be named.

My question has a slightly different spin, though:

It seems to me that one of the biggest selling points of Nix is basically infrastructure as code. (Of course being immutable etc. is nice by itself.)

I wonder how big the delta is for people like me: all my desktops/servers are based on Debian stable with heavy customization, but 100% automated via Ansible. It seems to me that a lot of the vocal Nix users (fans) switched from a pet desktop and discovered IaC via Nix, and that in the end they are raving about IaC (for which Nix may or may not be a good vehicle).

When I gave Silverblue a try, I totally loved it, but to configure it for my needs I basically would have needed to configure the host system, some containers, and overlays to replicate my Debian setup, so it seemed like too much effort to arrive nearly where I started. (And of course I can use distrobox/podman and have containerized environments on Debian without trouble.)

Am I missing something?

0
submitted 2 years ago* (last edited 2 years ago) by wolf@lemmy.zip to c/linux@lemmy.ml
 

What are your 'defaults' for your desktop Linux installations, especially when they deviate from your distro's defaults? And what are your reasons for these deviations?

To give you an example of what I am asking for, here is my list with reasons, plus a rough setup sketch after the list (funnily enough, I use these settings on Debian, and they are AFAIK the defaults on Fedora):

  • Btrfs: I use Btrfs for transparent compression, which is a game changer for my use cases, and using it without RAID I have never had trouble with corrupt data on power failures, unlike with ext4.

  • ZRAM: I wrote about it somewhere else, but ZRAM transformed even my totally under-powered HP Stream 11" with 4GB RAM into a usable machine. Nowadays I don't have swap partitions anymore; I use ZRAM everywhere and it just works (TM).

  • ufw: I cannot fathom why firewalls with all ports but ssh closed are not the default. Especially on Debian, where unconfigured services are started by default after installation, it does not make sense to me.
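The sketch mentioned above (the UUID, mount point and zstd level are just examples):

```
# Btrfs transparent compression: a mount option in /etc/fstab, e.g.
#   UUID=...  /  btrfs  defaults,compress=zstd:3  0  0
# ZRAM on Debian via the zram-tools package (configured in /etc/default/zramswap):
sudo apt install zram-tools
# ufw: close everything inbound except ssh
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw enable
```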

My next project is to slim down my Gnome desktop installation, but I guess this is quite common in the Debian community.

Before you ask "Why not Fedora?" - I love Fedora, but I need something stable for work, and Fedora's recent kernels break virtual machines for me.

Edit: Forgot to mention ufw
