@yosh Uh, uhm. I took a PhD course on how to model this and arrived at roughly that number (long story on how I was allowed to do this, even though I only hold a Masters). We did this on server-client systems, but what is a train if not a request traveling through the internet?
Essentially, you can take timed Petri nets (a form of state machine where multiple states can be active at once - think "trains on routes, in stations") and use a statistical function to model how long it takes for a token (train) in that net to move to the next state (every edge has its own function: run length). You can then reason _locally_ about the states at every node using Markov chains ("how many trains are at this station at any given time, with what probability") and _back_-calculate that into delay probabilities.
Biggest learning: this isn't linear. It's almost linear until the point it breaks, after which delays grow rapidly and catastrophically across the network.
Which is also why roughly 70% system load is the point at which you should start thinking about upgrading your servers.
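To make that curve concrete: the simplest textbook stand-in for this (a single M/M/1 queue, not the timed-Petri-net model from the course) has a closed-form mean waiting time, and it shows the same shape - almost flat at low load, then exploding:

```python
def mm1_mean_wait(rho, mu=1.0):
    """Mean time a job waits in queue for an M/M/1 queue.

    rho: utilization (arrival rate / service rate), must be < 1
    mu:  service rate (jobs per unit time)
    Classic result: Wq = rho / (mu - lambda) = rho / (mu * (1 - rho))
    """
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    lam = rho * mu  # arrival rate implied by the utilization
    return rho / (mu - lam)

# Delay vs. load: near-linear early, then it blows up.
for rho in (0.5, 0.7, 0.9, 0.99):
    print(f"load {rho:.0%}: mean wait = {mm1_mean_wait(rho):.2f}")
```

With a unit service rate this prints waits of 1.00, 2.33, 9.00 and 99.00: going from 50% to 70% load roughly doubles the wait, 70% to 90% roughly quadruples it, and past 90% it goes vertical - which is the intuition behind the 70% rule of thumb.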
Sorry for the brain dump :).
-
@msfjarvis @yosh https://web.archive.org/web/20210425095523/https://www.cs.mun.ca/~wlodek/ is the archived website of our professor back then.
At a quick glance, this paper seems to cover most of it: https://scispace.com/pdf/modeling-and-performance-analysis-of-priority-queuing-3g6mmrdnxh.pdf
Like, this seriously informed my intuition for network applications, which was my career before Rust :).
-
@skade @msfjarvis @yosh as it happens dynamic resolution scaling and similar systems in video game rendering also work like that. 100% GPU load causes everything to go to hell (frame drops and worse) so we try to find setpoints that get us within some threshold below that. I don't remember what the magic % ended up being for the games I've worked on, but we try to run as close to the wire as we can and still have a stable cadence.
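The setpoint idea can be sketched as a one-line proportional controller (the function name, the 90% headroom and the 0.5 gain are all made-up illustration values, not from any shipped game): steer measured GPU frame time toward a target below the full frame budget by nudging the render-resolution scale.

```python
def adjust_render_scale(scale, gpu_frame_ms, budget_ms=16.7, headroom=0.9,
                        gain=0.5, lo=0.5, hi=1.0):
    """One step of a proportional controller for dynamic resolution scaling.

    Steers GPU frame time toward headroom * budget_ms (a setpoint safely
    below 100% load) by adjusting the render-resolution scale, which is
    clamped to [lo, hi].
    """
    target_ms = budget_ms * headroom
    error = (target_ms - gpu_frame_ms) / target_ms  # > 0 means headroom to spare
    return max(lo, min(hi, scale * (1.0 + gain * error)))
```

Real engines scale render area rather than linear dimension and low-pass-filter the frame-time signal first, but the core move is the same: regulate toward a setpoint below 100%, because sitting at the limit turns every tiny spike into a dropped frame.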
-
@aeva @skade @msfjarvis @yosh meanwhile gamers: "ackhshually running the GPU at 100% all the time is good cause it means you're getting all your money's worth of performance" like those boomers going on about how they paid for the whole screen and stretched a 4:3 picture to 16:9 back when wide-screen TVs were becoming mainstream but a big chunk of media was still not
-
@hazelnot @skade @msfjarvis @yosh well, you can do that, you just have to turn off vsync, and it's not an entirely unreasonable thing to do since it can greatly improve input latency at the cost of tearing
-
@aeva @skade @msfjarvis @yosh lol fair, however, my GPU running at 100% heats up to over 110°C, which, ok, it is rated for, but it doesn't feel safe, especially on a 6-year-old card that could just die at any time anyway
-
@hazelnot @skade @msfjarvis @yosh yeah i don't recommend it personally
-
@aeva actually I'm wondering, does disabling vsync but capping the framerate do anything? Cause that's what I've been doing for shooters to try to improve input lag without cooking my PC, but I have no idea if it helps or if the input lag is caused by the capped framerate itself
-
@hazelnot all it does is trade hitching for tearing when too many frames are late
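For what it's worth, a bare-bones frame cap (independent of vsync) is just sleeping out whatever is left of the frame budget - a sketch of the general idea, not how any particular game or driver implements it:

```python
import time

def cap_frame(frame_start, cap_fps=120.0):
    """Sleep away whatever remains of this frame's time budget.

    With vsync off, the swap happens as soon as the frame is done, so a
    late frame tears instead of stalling until the next vblank (hitching).
    The cap only limits how fast frames can come; it can't rescue frames
    that already blew the budget. Returns when the next frame may start.
    """
    budget = 1.0 / cap_fps
    remaining = budget - (time.perf_counter() - frame_start)
    if remaining > 0:
        time.sleep(remaining)
    return time.perf_counter()
```

Which is the trade-off in one line of control flow: an on-time frame gets paced (bounding GPU load and heat), while a late frame falls straight through the `if` and presents immediately - tearing instead of hitching.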