Agentic AI-based services are the new Shadow IT.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs I'm actively pitching a talk called 'claude is your insider threat now'
-
@briankrebs I'm actively pitching a talk called 'claude is your insider threat now'
@Viss @briankrebs I just had the exact same talk with our internal AI working group. I'm not sure it landed, but they had some quite interesting papers to read.
LLMs are a fascinating information science but kind of terrible tools.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs Shadow IT usually originates from actual requirements that can't be fulfilled by the IT department. Meaning it solves someone's real problems, but in a wrong way.
AI agents don't solve anyone's real problems (yet). They basically only create problems in any possible way.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs let's be honest tho shadowit.ai sounds pretty bad ass
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.
@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".
Frankly it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and just right now everyone's thinking it isn't. Incorrectly)
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.
@briankrebs@infosec.exchange
On the plus side, step #1 of setting up things like an #AWS/#Azure/#GCP account (especially production ones) is to disable the ability to create IAM users, forcing the use of IAM roles that are 2FA-authenticated via a service like #Okta, and the role-based authentication tokens are typically TTLed to a couple of hours.
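For what it's worth, "short-lived" only helps if nothing untrusted shares the session. A minimal sketch (the variable list is illustrative, assuming a typical cloud CLI setup) of how visible those role tokens are to any process, agent included, running in the same environment:

```python
import os

# Illustrative only: an agent process inherits the same environment as the
# developer's shell, so short-lived role credentials (STS session tokens,
# etc.) are readable with zero effort -- no exploit required.
SENSITIVE_VARS = (
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_SESSION_TOKEN",      # the token that's "TTLed to a couple of hours"
    "AZURE_CLIENT_SECRET",
    "GOOGLE_APPLICATION_CREDENTIALS",
)

def snarf_env_credentials() -> dict:
    """Return every cloud credential visible in this process's environment."""
    return {k: v for k, v in os.environ.items() if k in SENSITIVE_VARS}
```

A two-hour TTL limits the blast radius, but it does nothing about exfiltration in the moment.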
Still, a "good" (suspicious-quotes) agent setup would be pretty trivial to configure to snarf credentials from the relevant token services. That triviality likely applies more broadly.
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.
@briankrebs In several pen tests I've done across the last 18 months, one of the most interesting trends has been the sudden increase in the number of examples I've found of people who have thrown those API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.
A few weeks ago I found a GitHub repo where a developer had trained a model on a dump of their own corporate emails, and all those emails were just sitting in public on GitHub, containing lots of things like vendor SFTP creds. It's a free-for-all.
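The kind of sweep described above can be approximated in a few lines. These regexes are illustrative toys (real scanners such as gitleaks or trufflehog use far larger rule sets plus entropy heuristics):

```python
import re
from pathlib import Path

# Toy patterns for the kinds of secrets mentioned in the thread: AWS access
# keys, generic hard-coded API keys, and SFTP URLs with embedded credentials.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
    "sftp_url_with_creds": re.compile(r"\bsftp://[^\s:@]+:[^\s@]+@\S+"),
}

def scan_file(path: Path) -> list:
    """Return (pattern_name, line_number) for every hit in one file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```

Running even a crude scan like this against an org's public repos before an attacker does is cheap insurance.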
-
@briankrebs In several pen tests I've done across the last 18 months, one of the most interesting trends has been the sudden increase in the number of examples I've found of people who have thrown those API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.
A few weeks ago I found a GitHub repo where a developer had trained a model on a dump of their own corporate emails, and all those emails were just sitting in public on GitHub, containing lots of things like vendor SFTP creds. It's a free-for-all.
-
@SecureOwl @briankrebs I will confess to playing random songs on a coworker's Alexa after they checked their personal home Alexa key into a corporate git repository.
-
@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".
Frankly it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and just right now everyone's thinking it isn't. Incorrectly)
@wordshaper @briankrebs Unfortunately, I don't think the people doing this care or will ever care. Privacy laws tend to be a joke anyways and there is very little incentive for most people/companies to change. I don't think most governments even want that to change. It's better for them, allows more data collection, etc.
I wish I didn't have such a negative and cynical outlook on it all.
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.
@briankrebs oh, we don't even have 2FA, because... because. Have I mentioned we have a gigantic bloated mess of IT bureaucracy, but nobody cares that we don't have a secure image repo?
But somebody had the idea to write safe-dev guidelines, because paper is what keeps us safe, not patching vulns.
-
@briankrebs let's be honest tho shadowit.ai sounds pretty bad ass
@grumpasaurus@infosec.exchange @briankrebs@infosec.exchange This is definitely what we all need: autonomous AI running IaC deployments. I mean, what could go wrong??
-
@grumpasaurus@infosec.exchange @briankrebs@infosec.exchange This is definitely what we all need: autonomous AI running IaC deployments. I mean, what could go wrong??
@steff @briankrebs ok i see your point.
Let me make a bad ass logo to go with it. It will make you think of darkwarrior duck but you won't be able to put a finger on it.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs The AI Daily Brief tracks that as a metric, and the podcast occasionally talks about how prevalent it is. The only people who can really answer that are the frontier labs themselves; many avenues to inference exist, it's everywhere, and I imagine there are plenty of audio recordings and eyeglass surveillance behind secure doors.
-
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.
@briankrebs that sounds more like part of shadow-IT than a new version of it.
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs there are so many companies already whose business model is reining this in
-
Agentic AI-based services are the new Shadow IT. Change my mind.
I'm a bit old school, so: do you have the Excel sheets to prove it?
-
@briankrebs im actively pitching a talk called 'claude is your insider threat now'
@Viss @briankrebs Would love to watch if/when it's online
-
Agentic AI-based services are the new Shadow IT. Change my mind.
@briankrebs I mean, when the shadow outshines the object, is it a shadow anymore?