
Piero Bosio — Personal Social Web Site

A social forum federated with the rest of the world. Instances don't matter; people do.

How long has it been since you last used a meal kit service?

Uncategorized

The latest eight messages received from the Federation
  • @cancel ty


  • @mos_8502 nvidia has a $3999 computer, the DGX Spark, that is roughly the same speed and specs as the AMD option, in benchmarks, but the nvidia ecosystem is more mature. And, ASUS makes a mini PC with the same nvidia chipset and specs for $2999. So, if you need the nvidia level of compatibility, you have to spend more for 128GB. But, that's become less important in the past year or so as AMD has invested in AI tooling. So, for now, AMD is a bargain, relative to Apple or nvidia.


  • @vwbusguy @ids1024 My thinking is, longer term, for whatever use can be made of it, I would prefer a little box I can put on a shelf that sits on my local network and provides roughly the same interface as Claude or ChatGPT. Something that doesn’t suck down too much power.


  • @ids1024 @mos_8502 The worst thing is offloading GPU compute to system memory for a large model. It can be as slow as swapping to a spinning disk. The good news is it's unlikely an individual would really need to run the larger model versions.


  • @mos_8502 @vwbusguy As I understand, for large AI models you want a lot of VRAM (ideally more than even high end modern gaming GPUs). So that old GPU and workstation hardware wouldn't be especially helpful. (If you want it to be reasonably fast.)


  • @mos_8502 so, it continues to be cheapest to buy the investor-subsidized compute and GPU that OpenAI, Anthropic, Google, Microsoft, etc. want to provide. But, it does feel bad to me to trust in that or become dependent on that, especially since the models are all proprietary and I don't know what they're doing exactly or what they're doing with my data or that they'll do in the future.


  • @mos_8502 pytorch can use cpu. You'll just have to be a little more patient.

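The "PyTorch can use CPU" point above is the standard device-fallback idiom. A minimal sketch (the tiny linear layer and shapes are purely illustrative):

```python
import torch

# Prefer a CUDA GPU when one is present; otherwise fall back to the CPU.
# The same code runs on either device -- the CPU is just slower.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative tiny model; any torch.nn module moves between devices this way.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)

with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([1, 2]) on GPU or CPU alike
```

Nothing else in the script changes between the two cases, which is why "just be a little more patient" is the whole story for CPU inference.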

  • @mos_8502 GLM 4.7 can run in 205GB at 2-bit quantization. Some of that can be system memory, so if you had a system with a couple of large GPUs and a ton of system RAM, you could run a very good open model, among the best, comparable to Sonnet 4.5. Still not Opus/Codex 5.2 level, but it'll write working code. https://unsloth.ai/docs/models/glm-4.7

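As a back-of-envelope check on figures like the 205GB above: weight memory scales as parameters × bits ÷ 8. This is only a lower bound — real quantizations (Unsloth's included) mix bit-widths per layer and need extra room for the KV cache and activations — and the 800-billion-parameter figure below is a hypothetical round number, not the model's actual size:

```python
def quantized_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough lower bound on weight memory in GB: params * bits / 8."""
    return n_params * bits_per_weight / 8 / 1e9

# A hypothetical 800-billion-parameter model at 2 bits per weight:
print(quantized_weight_gb(800e9, 2))  # 200.0 GB -- the same ballpark as 205GB
```

The same formula shows why full 16-bit weights for a model that size would need roughly eight times as much memory, which is what makes aggressive quantization the only practical route on a single box.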
Suggested posts
  • Just vote

    Uncategorized poll · 0 Votes · 3 Posts · 1 View
    @shimst3r @jbz according to the first result¹ of my internet search, a ratio of 1:√2 was used² by medieval monks, so the answer is either A4 for all categories, or possibly A3 for the monks, if they are copying something that needs to be read by the choir. ¹ monnikenwerk.art/tutorials/man… ² rarely? not as often as other ratios? never mind, I'm not going to let reality stop me on my way to rightfully shitpost!
  • #Poll time!

    Uncategorized poll · 0 Votes · 1 Post · 11 Views
    If you had to choose between using only your smartphone or your laptop, which one would you choose?
  • 0 Votes · 4 Posts · 19 Views
    @rperezrosario I assumed programmer + fediverse = Linux, because that's my feed. But also, over the three-plus decades I've been working in tech, I've seen more and more devs become Linux-first, to the point where the company I work for now is almost entirely Linux users on the dev team. I don't keep close tabs on everyone, but when it comes up, it's more Linux than anything else, with macOS second. Everything cloud and web is Linux-native, so it's all easier on Linux.
  • 0 Votes · 2 Posts · 14 Views
    Thanks to everyone. I will be participating in #Fediforum and will propose a session on Two Years of October 7 on the Fediverse. So, yes.