
The number of people who don't know about #GPUSPH within #INGV is too damn high (.jpg).


Memes aside, I've had several opportunities these days to talk with people both within the Osservatorio Etneo and other branches of the Institute, and most of them had no idea something like that was being developed within INGV.

On the one hand, this is understandable, especially for teams that have never had a direct need to even look for such code, given the focus of their research.

On the other hand, this also shows that I should have been much more aggressive in marketing the project internally. (And don't even get me started on who had the actual managerial power to do so before me, but that would put me on a rant I'd rather avoid for now.)

I'm glad I've finally started working on this aspect, but I also can't say I'm too happy about having to.

Hopefully this is something that will help bring critical mass to it.


Suggested posts
  • "When they used to play the good ones.

    Uncategorized
    1
    1
    0 Votes
    1 Posts
    1 Views
    "When they used to play the good ones. Long ago"Apollo in Real Timehttps://apolloinrealtime.org/> A real-time interactive journey through the Apollo missions. Relive every moment as it occurred.
  • Fit check.

    Uncategorized
    1
    1
    0 Votes
    1 Posts
    0 Views
    Fit check. Do you like my style?
  • 0 Votes
    12 Posts
    28 Views
    @stefano the results where a chock 😂😂
  • 0 Votes
    1 Posts
    0 Views
  • Today I introduced a much-needed feature to #GPUSPH.

    Our code supports multi-GPU and even multi-node execution, so in general, if you have a large simulation, you'll want to distribute it over all your GPUs using our internal support for it. In some cases, however, you need to run a battery of simulations, and the problem size isn't large enough to justify using more than a couple of GPUs per simulation.

    In this case, rather than running the simulations in your set serially (one after the other) using all GPUs for each, you'll want to run them in parallel, potentially even each on a single GPU. The idea is to find the next available (set of) GPU(s) and launch a simulation on them while there are still available sets, then wait until a “slot” frees up and start the next one(s) as slots get freed. Until now, we've been doing this manually, by partitioning the set of simulations and starting them in different shells.

    There is actually a very powerful tool to achieve this on the command line: GNU Parallel. As with all powerful tools, however, it is somewhat cumbersome to configure to get the intended result, and after Doing It Right™ one must remember the invocation magic …

    So today I found some time to write a wrapper around GNU Parallel that basically (1) enumerates the available GPUs and (2) appends the appropriate --device command-line option to the invocation of GPUSPH, based on the slot number.

    #GPGPU #ParallelComputing #DistributedComputing #GNUParallel
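
    For context, GNU Parallel exposes the job slot number through its {%} replacement string, which is what makes this kind of slot-to-GPU mapping possible. As a rough illustration of the same idea (in Python rather than GNU Parallel, and not the actual GPUSPH wrapper), here is a minimal sketch: the GPU enumeration via nvidia-smi -L, the GPUS_PER_JOB knob, and the batch of commands are all assumptions made for the example; only the --device option name comes from the post.

        #!/usr/bin/env python3
        """Hypothetical sketch: run a batch of simulations in parallel,
        pinning each job to the next free (set of) GPU(s)."""
        import queue
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        GPUS_PER_JOB = 1  # assumed: one GPU per simulation in this batch

        def list_gpus():
            # Enumerate GPUs: `nvidia-smi -L` prints one line per device.
            out = subprocess.run(["nvidia-smi", "-L"],
                                 capture_output=True, text=True, check=True)
            return list(range(len(out.stdout.strip().splitlines())))

        def main():
            n_slots = max(1, len(list_gpus()) // GPUS_PER_JOB)

            # Pool of free slots: a job takes one, runs, then gives it back.
            # This implements the "wait until a slot frees up" behaviour
            # described in the post.
            free_slots = queue.Queue()
            for s in range(n_slots):
                free_slots.put(s)

            # Hypothetical batch; real invocations would come from the user.
            commands = [["./GPUSPH", f"--case=case{i:02d}"] for i in range(12)]

            def run(cmd):
                slot = free_slots.get()  # blocks while all slots are busy
                try:
                    # Map the slot to a contiguous range of GPU indices and
                    # append --device (the comma-separated value format is
                    # an assumption).
                    devs = ",".join(str(slot * GPUS_PER_JOB + i)
                                    for i in range(GPUS_PER_JOB))
                    subprocess.run(cmd + ["--device", devs], check=True)
                finally:
                    free_slots.put(slot)  # release the slot for the next job

            with ThreadPoolExecutor(max_workers=n_slots) as pool:
                list(pool.map(run, commands))  # list() surfaces any failures

        if __name__ == "__main__":
            main()

    Capping the pool at n_slots concurrent jobs means each running job holds a distinct slot, so no two simulations ever share a GPU set; GNU Parallel's -j option presumably plays the same role in the real wrapper.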