
A few days ago, a client’s data center "vanished" overnight.

Uncategorized
  • A few days ago, a client’s data center (well, actually a server room) "vanished" overnight. My monitoring showed that all devices were unreachable. Not even the ISP routers responded, so I assumed a sudden connectivity drop. The strange part? Not even via 4G.

    I then suspected a power failure, but the UPS should have sent an alert.

    The office was closed for the holidays, but I contacted the IT manager anyway. He was home sick with a serious family issue, but he got moving.

    To make a long story short: the company deals in gold and precious metals. They have an underground bunker with two-meter thick walls. They were targeted by a professional gang. They used a tactic seen in similar hits: they identify the main power line, tamper with it at night, and send a massive voltage spike through it.

    The goal is to fry all alarm and surveillance systems. Even if battery-backed, they rarely survive a surge like that. Thieves count on the fact that during holidays, owners are away and fried systems can't send alerts. Monitoring companies often have reduced staff and might not notice the "silence" immediately.

    That is exactly what happened here. But there is a "but": they didn't account for my Uptime Kuma instance monitoring their MikroTik router, installed just weeks ago. Since it is an external check, it flagged the lack of response from all IPs without needing an internal alert to be triggered from the inside.

    The team rushed to the site and found the mess. Luckily, they found an emergency electrical crew to bypass the damage and restore the cameras and alarms. They swapped the fried server UPS with a spare and everything came back up.

    The police warned that the chances of the crew returning the next night to "finish" the job were high, though seeing the systems back online would likely make them move on. They also warned that thieves sometimes break in just to destroy servers to wipe any video evidence.

    Nothing happened in the end. But in the meantime, I had to sync all their data off-site (thankfully they have dual 1Gbps FTTH), set up an emergency cluster, and ensure everything was redundant.

    Never rely only on internal monitoring. Never.
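
    For the curious, the external check itself is nothing fancy. Below is a minimal, illustrative sketch of the idea (not my actual setup: the IPs, port and webhook URL are made up): an off-site host probes the site's public endpoints over TCP and fires an alert only when every path is dead at once.

        #!/usr/bin/env python3
        # External reachability probe - illustrative sketch, not production code.
        # All endpoints, ports and the webhook URL below are hypothetical examples.
        import socket
        import urllib.request

        TARGETS = [
            ("203.0.113.10", 443),   # example: WAN IP of the router on the FTTH line
            ("198.51.100.20", 443),  # example: public IP of the 4G backup link
        ]
        WEBHOOK = "https://alerts.example.com/hook"  # hypothetical alerting endpoint
        TIMEOUT = 5  # seconds per probe


        def reachable(host, port):
            """Return True if a TCP connection to host:port succeeds within TIMEOUT."""
            try:
                with socket.create_connection((host, port), timeout=TIMEOUT):
                    return True
            except OSError:
                return False


        def alert(message):
            """POST the alert text to an external webhook (chat, pager, etc.)."""
            req = urllib.request.Request(WEBHOOK, data=message.encode("utf-8"), method="POST")
            urllib.request.urlopen(req, timeout=TIMEOUT)


        if __name__ == "__main__":
            down = [f"{host}:{port}" for host, port in TARGETS if not reachable(host, port)]
            if len(down) == len(TARGETS):
                # Every link is unreachable at the same time: power cut, sabotage,
                # or a total uplink failure - in any case, someone needs to look.
                alert("SITE UNREACHABLE on all links: " + ", ".join(down))

    Run something like this from cron on a machine that lives somewhere else. Uptime Kuma adds retries, history and notification plumbing on top, but the essential property is the same: the probe runs from a place the attackers cannot fry.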

  • @stefano Nice story! And yeah, internal monitoring is a must, but you also need an external one, operated by someone other than yourself.

  • @stefano Only in BSDcafé can you read actual techno thrillers like this.

  • @EnigmaRotor Sometimes the lights are low and the atmosphere is dark...

  • @stefano Stefano Jones, P.A.: a very noir series.

  • @EnigmaRotor /me making coffee in the dark, while whispering some IT horror stories

  • @stefano Oh, if the genre is horror, then don't forget to tell the tale of the guy who pronounced "Microsoft" three times in front of his mirror. What happened next, the blue mirror of death, is frightening to the bone.

  • oblomov@sociale.network shared this topic
  • @stefano feeling of :xkcd:`705` intensifies :D
  • In the first sentence you mention a "data center", but such an attack would not work against a data center: to qualify as one, you need two buildings with independent power supplies, at a safe distance from each other, and so on. I think this was, at best, a hosting room, not a data center.
  • @uriel Sure - we tend to call a specific place inside the company that hosts the servers (with A/C, etc.) a "data center". Maybe a little inappropriate here.

  • @stefano I must repeat this: never trust on-site backups either. Fire will destroy those. And RAID is not backup.
    You know this, but it bears repeating!

  • Well, not "a little". The one you described is - at best - a server room, not even a hosting center, since, according to the blueprints, there was no redundancy...
  • stefano@mastodon.bsd.cafe shared this topic
  • @uriel You're right. I've updated the original post to clarify it. Thank you for pointing it out!

