Piero Bosio - Personal Social Web Site (Fediverso)

A social forum federated with the rest of the world. Instances don't matter, people do.

Scott Williams 🐧

@vwbusguy@mastodon.online
Posts: 5 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • Okay. So the AI bubble bursts.
    vwbusguy@mastodon.online

    @ids1024 @mos_8502 The worst thing is offloading GPU compute to system memory for a large model. It can be as bad as swapping on a spinning disk. The good news is that an individual is unlikely to really need the larger model versions.

    Uncategorized

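The post above is about a model whose weights do not fit in VRAM. A minimal sketch, assuming you know the model's rough size on disk, of checking free GPU memory with PyTorch before loading, so you fall back to CPU deliberately instead of letting the weights spill into system memory; the fits_on_gpu helper and the 1 GB margin are illustrative values, not anything from the post.

```python
# Sketch: decide up front whether a model fits in free VRAM.
# fits_on_gpu and the sizes below are illustrative assumptions.
import torch

def fits_on_gpu(model_size_gb: float, margin_gb: float = 1.0) -> bool:
    """True if the model plus a working margin fits in currently free VRAM."""
    if not torch.cuda.is_available():
        return False
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    return free_bytes / 1024**3 >= model_size_gb + margin_gb

# Example: a ~4 GB quantized model.
device = "cuda" if fits_on_gpu(4.0) else "cpu"
print(f"Loading on {device}")
```
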
  • Okay. So the AI bubble bursts.
    vwbusguy@mastodon.online

    @mos_8502 PyTorch can use the CPU. You'll just have to be a little more patient.

    Uncategorized

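A minimal sketch of the CPU fallback the post above describes: the same PyTorch code runs on either device, it is just slower on CPU. The toy linear layer stands in for a real model.

```python
# Sketch: pick CUDA when available, otherwise run the identical code on CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

layer = torch.nn.Linear(512, 512).to(device)   # stand-in for a real model
x = torch.randn(2, 512, device=device)
with torch.no_grad():
    y = layer(x)
print(device, y.shape)
```
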
  • Okay. So the AI bubble bursts.
    vwbusguy@mastodon.online

    @mos_8502 Hank Green gave a good breakdown of it.

    https://youtu.be/H_c6MWk7PQc

    Uncategorized

  • Okay. So the AI bubble bursts.
    vwbusguy@mastodon.online

    @mos_8502 There's a lot of "it depends", but most of the cost of AI is in training the models rather than the queries themselves. If you have a machine with a newish GPU and you download a model like granite or phi, the cost is your time and a nominal amount of electricity.

    Uncategorized

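The post above treats the model download as the only real up-front cost. A minimal sketch of that one-time step, assuming the weights come from the Hugging Face Hub via huggingface_hub (the post names no tool); the repo id microsoft/phi-2 and the local path are example values, and a granite variant would work the same way.

```python
# Sketch: fetch model weights once, then every query runs from local disk.
# Repo id and target directory are example values.
import os
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="microsoft/phi-2",                       # example small model
    local_dir=os.path.expanduser("~/models/phi-2"),  # keep it for offline use
)
print("Model files cached at:", local_path)
```
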
  • Okay. So the AI bubble bursts.
    vwbusguy@mastodon.online

    @mos_8502 Mainly, get a capable model (like granite) locally on your box and call it with PyTorch. I have a Python script on my laptop for running LLM queries completely locally. Having a newish GPU will help.

    Uncategorized
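
A minimal sketch of the kind of local query script the post above mentions, using Hugging Face transformers on top of PyTorch; the post only says it calls the model with pytorch, so the transformers pipeline wrapper and the model id are assumptions, and any locally cached instruct model (granite, phi, ...) could be substituted.

```python
# Sketch: run text-generation queries entirely on the local machine.
# Model id is an example; swap in a locally downloaded granite/phi model.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1   # first GPU if present, else CPU

generator = pipeline(
    "text-generation",
    model="microsoft/phi-2",
    device=device,
)

def ask(prompt: str, max_new_tokens: int = 128) -> str:
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    return out[0]["generated_text"]

if __name__ == "__main__":
    print(ask("Explain in one sentence why local inference is cheap."))
```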