Agentic AI-based services are the new Shadow IT. Change my mind.
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other sensitive material into fairly new and questionable agentic AI tools or platforms. So many companies say, "oh, we're taking a wait-and-see approach to adopting AI." Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication, or limited auth with no 2FA.
@briankrebs@infosec.exchange
On the plus side, step #1 of setting up things like an #AWS/#Azure/#GCP account (especially production ones) is to disable the ability to create IAM users, forcing the use of IAM roles that are 2FA-authenticated via a service like #Okta, and the role-based authentication tokens are typically TTLed to a couple of hours.
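The lockdown described above is usually enforced with a Service Control Policy at the org level. Below is a minimal, hypothetical sketch of such a policy as a Python dict; the statement ID and action list are illustrative assumptions, not a drop-in policy for any particular org.

```python
import json

# Hedged sketch: a hypothetical AWS Service Control Policy (SCP) that
# denies creation of long-lived IAM users and access keys org-wide,
# pushing everyone toward short-lived, role-based credentials instead.
# Sid and action list are illustrative, not an audited production policy.
SCP_DENY_IAM_USERS = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLongLivedIamCredentials",
            "Effect": "Deny",
            "Action": [
                "iam:CreateUser",
                "iam:CreateAccessKey",
                "iam:CreateLoginProfile",
            ],
            "Resource": "*",
        }
    ],
}

# Emit the policy JSON, e.g. to feed into the org's IaC pipeline.
print(json.dumps(SCP_DENY_IAM_USERS, indent=2))
```

Attached at the org root, a deny like this wins over any allow in member accounts, which is what makes it a useful backstop against ad-hoc long-lived credentials.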
Still, a "good" (suspicious quotes) agent setup would be pretty trivial to configure to snarf credentials from the relevant token services. That triviality likely applies more broadly.
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other sensitive material into fairly new and questionable agentic AI tools or platforms. So many companies say, "oh, we're taking a wait-and-see approach to adopting AI." Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication, or limited auth with no 2FA.
@briankrebs In several pen tests I've done across the last 18 months, one of the most interesting trends has been the sudden increase in the number of examples I've found of people who have thrown those API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.
A few weeks ago I found a GitHub repo where a developer had trained a model on a dump of their own corporate emails, and all those emails were just sitting in public, on GitHub, and contained lots of things like vendor SFTP creds. It's a free-for-all.
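Finding leaks like the ones described above takes nothing more than pattern matching. The sketch below is a toy version of what scanners such as gitleaks or truffleHog do at scale; the regexes here are illustrative assumptions covering a couple of well-known token formats, not any tool's actual ruleset.

```python
import re

# Hedged sketch: trivial regex-based secret scanning over text.
# Patterns are illustrative only; real scanners ship far larger,
# regularly updated rule sets with entropy checks and allowlists.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Point something like this at a repo's file contents (or its full git history, where deleted secrets live on) and the "accidentally public" leaks fall out almost immediately.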
-
@briankrebs In several pen tests I've done across the last 18 months, one of the most interesting trends has been the sudden increase in the number of examples I've found of people who have thrown those API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.
A few weeks ago I found a GitHub repo where a developer had trained a model on a dump of their own corporate emails, and all those emails were just sitting in public, on GitHub, and contained lots of things like vendor SFTP creds. It's a free-for-all.
-
@SecureOwl @briankrebs I will confess to playing random songs on a coworker's Alexa when they checked in their personal home Alexa key into a corporate git repository.
-
@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".
Frankly, it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and right now everyone is just thinking, incorrectly, that it isn't.)
@wordshaper @briankrebs Unfortunately, I don't think the people doing this care or will ever care. Privacy laws tend to be a joke anyway, and there is very little incentive for most people or companies to change. I don't think most governments even want that to change; it's better for them, allows more data collection, etc.
I wish I didn't have such a negative and cynical outlook on it all.
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other sensitive material into fairly new and questionable agentic AI tools or platforms. So many companies say, "oh, we're taking a wait-and-see approach to adopting AI." Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication, or limited auth with no 2FA.
@briankrebs Oh, we don't even have 2FA, because... because. Have I mentioned we have a gigantic bloated mess of IT bureaucracy, but nobody cares that we don't have a secure image repo?
But somebody had the idea to write safe-dev guidelines, because paper is what keeps us safe, not patching vulns.
-
@briankrebs let's be honest though, shadowit.ai sounds pretty badass
@grumpasaurus@infosec.exchange @briankrebs@infosec.exchange This is definitely what we all need: autonomous AI running IaC deployments. I mean, what could go wrong??
-
@grumpasaurus@infosec.exchange @briankrebs@infosec.exchange This is definitely what we all need: autonomous AI running IaC deployments. I mean, what could go wrong??
@steff @briankrebs OK, I see your point.
Let me make a badass logo to go with it. It will make you think of Darkwing Duck, but you won't be able to put your finger on it.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs The AI Daily Brief tracks that as a metric, and the podcast occasionally talks about how prevalent it is. The people who can answer that are the frontier labs themselves; many avenues to inference exist, and it's everywhere. I imagine plenty of audio recordings and eyeglass surveillance behind secure doors, too.
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other sensitive material into fairly new and questionable agentic AI tools or platforms. So many companies say, "oh, we're taking a wait-and-see approach to adopting AI." Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication, or limited auth with no 2FA.
@briankrebs That sounds more like part of shadow IT than a new version of it.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs There are so many companies already whose business model is reining this in.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
I'm a bit old school, so: do you have the Excel sheets to prove it?
-
@briankrebs I'm actively pitching a talk called 'Claude is your insider threat now'.
@Viss @briankrebs Would love to watch if/when it's online
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs I mean, when the shadow outshines the object, is it a shadow anymore?
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs Why? You are correct 😄
-
@briankrebs I mean, when the shadow outshines the object, is it a shadow anymore?
@danielkennedy74 That reminds me of some optics: See Obscured Airy pattern @ https://en.wikipedia.org/wiki/Airy_disk
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs I have to admit... earlier this week I spent about five hours trying to get this Ubiquiti camera system to work. I tried everything I could think of.
Finally, I just gave SSH access to Claude Code, set it to no-permission-necessary, and told it to keep trying to get those cameras online until they worked. Then I went out and had a nice dinner with my wife and a couple glasses of wine.
Came back to shut the thing off... all set, worked perfectly. Still running.
So, if you folks don't think you can be replaced (at least partially) by AI, think again.
-
@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".
Frankly, it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and right now everyone is just thinking, incorrectly, that it isn't.)
-
@SecureOwl @briankrebs I will confess to playing random songs on a coworker's Alexa when they checked in their personal home Alexa key into a corporate git repository.
@ai6yr @SecureOwl @briankrebs
Random songs? Not Rick Astley?
-
@wordshaper @briankrebs Unfortunately, I don't think the people doing this care or will ever care. Privacy laws tend to be a joke anyway, and there is very little incentive for most people or companies to change. I don't think most governments even want that to change; it's better for them, allows more data collection, etc.
I wish I didn't have such a negative and cynical outlook on it all.
@mrmoore @briankrebs HIPAA has some teeth, and frankly I would be shocked if a bunch of attorneys *haven't* violated their professional oaths. More importantly, while the US may be a privacy nightmare, the EU and UK do have a bit more to say on the matter, with regulations that have teeth.