• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: December 25th, 2023

  • Hey,

    Person here who despises Electron apps, partly because of the memory footprint and partly because I like neither Chromium nor Node.js - personal preference mainly.

    From your description I have the feeling that it’s unclear to your user base whether Electron is settled or up for debate. There is only a thin line between “explaining” and “defending”.

    In terms of communication: “We’re using Electron as the foundation because it allows us to focus on development. We’ve considered alternatives like Tauri and XYZ and opted in favor of Electron.”

    If there are situations that might make you rethink the choice, state those as well (“if someone provides a proof of concept via XYZ that an alternative is faster by y% while still letting us use (your core libraries and languages), we might consider a refactor”).

    If you engaged with me after an Electron rant on your codebase, you’d just raise my hope that I might change your mind. Don’t give people hope, don’t feed the trolls, and do your thing!

    Just please be honest with yourself: your app doesn’t use “50 to 60 MB”, it uses 500 MB-ish at idle because of your choice. And that’s okay, as long as you as the developer say that it is.





    Accepting concepts like “right” and “wrong” gives those tools way too much credit, basically following the AI narrative of the corporations behind them. Those terms can only be applied to the output, not to the tool itself.

    To be precise:

    LLMs can’t be right or wrong because the way they work has no link to any reality - it’s stochastics, not evaluation. I also don’t like the term hallucination for the same reason. It’s simply a too-high temperature setting jumping into a nearby but unrelated vector set.
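    To make the temperature point concrete, here is a toy softmax sampler - a hedged sketch with made-up logits, not any particular model’s implementation:

```python
import math
import random

def sample(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Toy softmax sampler: higher temperature flattens the distribution,
    so nearby-but-unrelated tokens get picked more often."""
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    total = sum(math.exp(v - m) for v in scaled.values())
    probs = {t: math.exp(v - m) / total for t, v in scaled.items()}
    r, acc = rng.random(), 0.0
    for token, p in probs.items():
        acc += p
        if r <= acc:
            return token
    return token  # guard against float rounding at the tail

# Made-up logits for the next token after "The capital of France is ...":
logits = {"Paris": 5.0, "Lyon": 2.0, "banana": 0.5}
rng = random.Random(0)
cold = [sample(logits, 0.2, rng) for _ in range(1000)]
hot = [sample(logits, 5.0, rng) for _ in range(1000)]
# "banana" almost never appears at low temperature, but shows up
# in a sizable share of samples at high temperature.
print(cold.count("banana"), hot.count("banana"))
```

    The sampler never evaluates whether “Paris” is true - it only draws from a probability distribution, which is the whole point above.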

    Why this is an important distinction: arguing that an LLM is wrong is arguing on the ground of ChatGPT and the likes. It becomes a case of “oh, but we’ll make them better!”, and their marketing departments are overjoyed.

    To take your calculator analogy: just as those tools have floating-point errors that are inherent to them, wrong outputs are a core part of LLMs.

    We can minimize that, but then they automatically lose part of their function. This limitation is much stronger for LLMs than limiting a calculator to 16 digits after the decimal point, though…
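    The calculator half of the analogy is easy to demonstrate with a classic IEEE 754 example (plain Python, nothing app-specific):

```python
import math

# Neither 0.1 nor 0.2 is exactly representable in binary floating point,
# so the error below is inherent to the tool, not a bug in one calculator.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# We work around the limitation instead of removing it:
print(math.isclose(a, 0.3))  # True
```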




  • That’s an utterly ignorant statement.

    To expect others, often volunteers, to take such a personal risk because the legislation in one part of the world is utterly fucked. How about expecting the people who actually live in that country and state, and who have a chance to influence those laws, to step up their game instead of telling third parties to bear individual and personal consequences?




    Traefik and Caddy were mentioned; the third one usually in the game is nginxproxymanager.

    I’m using both Traefik and Nginx in two different setups. Nginx Proxy Manager can be configured natively via the UI, which makes checking configurations a bit easier.

    Traefik, on the other hand, is configured easily within the compose file itself, and you have everything in one place.

    This turned out to be tiresome, though, if you don’t have a monolithic compose file - that’s actually the reason why I switched to NPM in the first place.

    I don’t have any experience with Caddy, so I can’t provide anecdotal insights there.
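    For anyone who hasn’t seen the “everything in one place” Traefik style: a minimal sketch, with service and domain names made up for illustration:

```yaml
# docker-compose.yml - hypothetical names, sketch only
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami
    labels:
      # Routing lives right next to the service it routes to.
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=web"
```

    The upside and the downside are the same thing: the routing config lives in the compose file, so once your services are spread over many compose files, so is your proxy configuration.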


    I really like it already, so take this as an alternative, not as an improvement. I don’t have a good eye for aesthetics anyway, so this is more about structure.

    Personally, I switched from a single dashboard to purpose-driven hubs - I can’t imagine a situation where I regularly need my infrastructure and my calendar at the same time, for example.

    Another point is context grouping: your release checker is quite far away from your appointments and calendar. It looks to me to be sorted by content rather than function (i.e. it’s entertainment, so it’s next to YouTube). The same is true for your interaction patterns: there is a lot of visual information which I’m sure you’ll rarely interact with but instead consume. And then there are clearly external links, both bottom left (opencloud, tooling) and top right (external media), in addition to your own self-hosted content.

    My suggestion is therefore a process instead of a change: note down for a few days when you consume which features of this awesome dashboard together. Then restructure the content of the whole dashboard based on your usage patterns - either as a new monolith or even by experimenting with splitting it.

    I’d even suggest using a different medium than your usage device (if it’s mainly a desktop PC, use pen and paper; if it’s your laptop, use your phone; if it’s your phone you use this dashboard on, then you might have different problems :D).


    Sorry if I use the wrong English terms! I think you are right :) By “system” I referred to the literal computer system the file is saved on. I’m not a dev of one of those tools, but I know several maintainers and developers, which is why I’m a bit sensitive there! That’s why I (badly, apparently - apologies!) tried to focus on the developer point of view and ignored the whole cost/benefit aspect, which you described very well - thank you for that!

    Back to my point re: local security, because I feel this is the only one where I see a fundamentally different assessment between us (context: accessing an unencrypted file on my machine): I’m not aware of a mechanism to read files (unencrypted or not) on a host without a preceding incident. How else could your files be accessed? I don’t understand how I might have this backwards.

    You’re completely right, of course, that there are a lot of tools out there one could use - but it would be on the developer to implement support for those. If you support one, you can be damn sure users will shout “I want to use Y”. And then you would still need a fallback for anyone not willing to install a supported third-party tool.


    Cybersecurity works inherently with risk scenarios. Your comparison is flawed because it assumes there is an absolute security hygiene standard.

    That said: I highly appreciate your approach to the subject, i.e. looking at the code and raising a discussion about something that looks wrong. Thank you for that!

    On the subject itself:

    There are two common ways to implement token management. The most common one I am aware of is actually the text-based one. Even a lot of cloud services keep secrets in environment variables after a vault is unlocked via IAM. That’s because the risk assessment is: if a perpetrator has access to these files, the whole system is already compromised - any encryption that gets decrypted locally is therefore also compromised.

    The second approach is to use the OS-level secret manager, which is what you’re implicitly asking for, from my understanding.

    While I agree that this would be the “cleaner” solution, it also destroys cross-platform compatibility or increases the maintenance load linearly with the number of platforms supported, with a huge jump for the second one: I’d now need a test pipeline with an OS different from the one I’m using.