
Piero Bosio Personal Social Web Site (Fediverso)

A social forum federated with the rest of the world. Instances don't matter, people do.

Florian Gilcher

@skade@hachyderm.io
About

Posts: 2 · Topics: 1 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0
Posts


  • @yosh Uh, uhm.
    skade@hachyderm.io

    @msfjarvis @yosh https://web.archive.org/web/20210425095523/https://www.cs.mun.ca/~wlodek/

    This is the archived website of our professor from back then.

    At a quick glance, this paper seems to cover most of it. https://scispace.com/pdf/modeling-and-performance-analysis-of-priority-queuing-3g6mmrdnxh.pdf

    Like, this seriously informed my intuition for network applications, which was my career before Rust :).

    Uncategorized

  • @yosh Uh, uhm.
    skade@hachyderm.io

    @yosh Uh, uhm. I took a PhD course on how to model this and arrived at roughly that number (long story on how I was allowed to do this, even though I only hold a Masters). We did this on server-client systems, but what is a train but a request traveling through the internet?

    Essentially, you can take timed Petri nets (a form of state machine with multiple states active at once; think "trains on routes, in stations") and use a statistical function to model how long it takes for a token (train) in that net to move to the next state (every edge has its own function and run length). You can then reason _locally_ about the states at every node using Markov chains ("how many trains are at this station at any given time, and with what probability") and then _back_-calculate that into delay probabilities.
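    The token-through-a-net idea can be sketched with a tiny Monte Carlo. This is a hypothetical illustration, not the course's method: the edge timings are made up, and each edge's "statistical function" is assumed to be exponential, whereas the exact analysis in the post works with Markov chains rather than sampling.

    ```python
    import random

    # Hypothetical sketch: a token (train/request) hops along a chain of
    # stations; each edge has its own random run length, as in a timed
    # Petri net. We estimate the end-to-end delay distribution by
    # Monte Carlo sampling.

    random.seed(42)

    EDGE_MEANS = [4.0, 2.0, 6.0, 3.0]  # mean traversal time per edge (made up)

    def one_run() -> float:
        # Exponential traversal time per edge, sampled independently per hop.
        return sum(random.expovariate(1.0 / m) for m in EDGE_MEANS)

    samples = [one_run() for _ in range(10_000)]
    mean = sum(samples) / len(samples)
    late = sum(s > 30 for s in samples) / len(samples)  # estimated P(delay > 30)
    print(f"mean end-to-end time ~ {mean:.1f}, P(time > 30) ~ {late:.2%}")
    ```

    The mean comes out near the sum of the edge means (15 here), but the tail probability is the interesting part: that is the "delay probability" you would back-calculate analytically per node instead of sampling.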

    Biggest learning: this isn't linear. It's almost linear until the point where it breaks, and from there delays grow rapidly and catastrophically across the network.

    Which is also why 70% system load is about the moment where you should start thinking about upgrading your servers.

    Sorry for the brain dump :).

    Uncategorized
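  • That nonlinearity can be illustrated with the simplest Markov-chain queue, an M/M/1 node. This is a single-node stand-in for the full network model described in the post, under the standard M/M/1 assumptions (Poisson arrivals at rate λ, exponential service at rate μ): the mean time in the system is W = 1 / (μ − λ), which stays tame at low load and explodes as utilization approaches 1.

    ```python
    # Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda).
    # Expressed in multiples of the service time, so service_rate = 1
    # means "1.0" is the no-queueing baseline.

    def mean_time_in_system(utilization: float, service_rate: float = 1.0) -> float:
        """Mean sojourn time W = 1 / (mu - lambda), with lambda = utilization * mu."""
        if not 0.0 <= utilization < 1.0:
            raise ValueError("utilization must be in [0, 1)")
        arrival_rate = utilization * service_rate
        return 1.0 / (service_rate - arrival_rate)

    for rho in (0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.99):
        print(f"load {rho:>4.0%}: mean delay {mean_time_in_system(rho):7.1f}x service time")
    ```

    At 50% load a request takes 2x the service time; at 70% it is about 3.3x; at 90% it is 10x, and at 99% it is 100x. The curve looks nearly flat up to around 70% and then turns vertical, which is exactly why 70% is a sensible "start upgrading" threshold.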