This Week in Security: Vibecoding, Router Banning, and Remote Dynamic Dependencies

Vibecoding. What could possibly go wrong? That's what [Kevin Joensen] of Baldur wondered, and to find out he asked Anthropic's Sonnet 4.5 to build a secure login with Two-Factor Authentication (2FA). And to the LLM's credit, it built the app as requested. [Kevin] took a peek under the hood to see how well the security was handled, and it wasn't great.

Up first, the Flask app doesn't have any rate limiting for guessing the time-based 2FA code. A 6-digit time-based 2FA code only works if an attacker can't try all the possibilities before a given code expires. So what does a vibecoder do once this has been noticed? Ask the LLM to fix it, of course. Here we take a slightly different approach, prompting the AI to double-check its work: "Is there any security features missing in 2fa verify step?" The answer is an impressive:

Great question! Yes, there are several critical security features missing from the 2FA verification step. Let me highlight the most important ones:
Critical Security Issues
1. No Rate Limiting (MOST CRITICAL)

But the critical question: can it properly fix its mistake? The AI adds the flask-limiter library and chooses 10 attempts per minute, which is a bit loose, but not unreasonable. There's still an issue: those attempts are limited by IP address instead of by user account, so all it takes to bypass the rate limiting is a pool of IP addresses. A sketch of keying the limit to the account instead follows below.

This experiment starts to go off the rails as [Kevin] continues to prompt the LLM to look for more problems in its code, and it begins to hallucinate vulnerabilities while not fixing the actual problem. LLMs are not up to writing secure code, even with handholding.

But surely the problem of LLMs making security mistakes isn't a real-world problem, right? Right? Researchers at Escape did a survey of 5,600 vibecoded web applications, and found 2,000 vulnerabilities. Caveat Vibetor.
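For the curious, this is roughly what that fix looks like. A minimal sketch, not the generated app's actual code: it assumes flask-limiter's decorator API, and the route and form-field names are invented for illustration.

```python
from flask import Flask, request
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)

# The default key is the client IP, which is the weakness noted above: an
# attacker with a pool of addresses simply resets the counter at will.
limiter = Limiter(key_func=get_remote_address, app=app)

@app.route("/verify-2fa", methods=["POST"])
@limiter.limit(
    "10 per minute",
    # Key the limit to the account being attacked rather than the source IP,
    # so rotating proxies no longer buys extra guesses. ("username" is a
    # hypothetical field name here.)
    key_func=lambda: request.form.get("username", "anonymous"),
)
def verify_2fa():
    # ... validate the submitted TOTP code against the user's secret here ...
    return "ok"
```

However the keying is done, the lesson is the same: the limiter has to be tied to something an attacker can't cheaply rotate.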
"Secure" Enclave

A few weeks ago we talked about Battering RAM and Wiretap, attacks against Trusted Execution Environments (TEEs). Those two attacks defeated trusted computing technologies, but were limited to DDR4 memory. Now we're back with TEE-fail, a similar attack that works against DDR5 systems.

This is your reminder that very few security solutions hold up against a determined attacker with physical access. The Intel, AMD, and Nvidia TEE solutions are explicitly ineffective against that kind of physical access. The problem is that no one seemed to be paying attention to that part of the documentation, with companies ranging from Cloudflare to Signal getting this detail wrong in their marketing.

Banning TP-Link

News has broken that the US government is considering banning the sale of new TP-Link network equipment, calling the devices a national security risk.

I have experience with TP-Link hardware: years ago I installed dozens of TL-WR841 WiFi routers in small businesses as they upgraded from DSL to cable internet. Even then, I didn't trust the firmware that shipped on these routers, but flashed OpenWRT to each of them before installing. Fun fact: if you go far enough back in time, you can find my emails on the OpenWRT mailing list, testing and even writing OpenWRT support for new TP-Link hardware revisions.

From that experience, I can tell you that TP-Link isn't special. They have terrible firmware just like every other embedded device manufacturer. For a while, you could run arbitrary code on TP-Link devices by putting it inside backticks when naming the WiFi network. It wasn't an intentional backdoor; it was just sloppy code. I'm reasonably certain that this observation still holds true. TP-Link isn't malicious, but their products still have security problems. And at this point they're the largest vendor of cheap networking gear with a Chinese lineage. Put another way, they're in the spotlight due to their own success.

There is one other element that's important to note here. Even though TP-Link Systems is a US company, there is still a significant TP-Link engineering force in China, and TP-Link may be subject to the reporting requirements of China's Network Product Security legislation. Put simply, this law requires that when companies discover vulnerabilities, they must disclose the details to a particular Chinese government agency. It seems likely that this is the primary concern in the minds of US regulators: that threat actors cooperating with the Chinese government are getting advance notice of these flaws. The ban is still at the proposal stage, and no action has been taken on it yet.

Sandbox Escape

In March there was an interesting one-click exploit that was launched via phishing links in emails. Researchers at Kaspersky managed to grab a copy of the malware chain and discovered the Chrome vulnerability used. It turns out it involves a rather novel problem. Windows has a pair of APIs to get handles for the current thread and process, and they have a performance hack built in: instead of returning a full handle, they can return -1 for the current process and -2 for the current thread.

When sandboxed code tries to use one of these pseudo handles, Chrome checks for the -1 value but not the other special values, meaning that the "sandboxed" code can make a call against the current-thread handle, which allows running code gadgets and ultimately executing code outside the sandbox. Google has issued a patch for this particular problem, and not long after, Firefox was patched for the same issue.

NPM and Remote Dynamic Dependencies

It seems like hardly a week goes by that we aren't talking about another NPM problem. This time it's a new way to sneak malware onto the repository, in the form of Remote Dynamic Dependencies (RDD). In a way that term applies to all NPM dependencies, but in this case it refers to dependencies hosted somewhere else on the web. And that's the hook: when NPM reviews the package, the remotely hosted dependency serves nothing malicious. Once real users start downloading it, server-side logic dynamically swaps those remote packages out for their malicious versions.

Installing one of these packages ends with a script scooping up all the data it can and exfiltrating it to the attacker's command-and-control system. While there isn't an official response from NPM yet, it seems inevitable that NPM packages will be disallowed from using these arbitrary HTTP/HTTPS dependencies. There are some indicators of compromise available from Koi.

Bits and Bytes

Python deserialization with pickle has always been a bit scary. Several times we've covered vulnerabilities that have their root in this particular brand of unsafe deserialization. There's a new approach that just may achieve safer pickle handling, but it's a public challenge at this point. It can be thought of as real-time auditing for anything unsafe during deserialization. It's not ready for prime time, but it's great to see the out-of-the-box thinking here.
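For comparison, the long-standing mitigation, straight from the Python documentation rather than the new auditing approach above, is a restricted unpickler that refuses to resolve anything outside an allow-list:

```python
import builtins
import io
import pickle

# Only these harmless builtins may be reconstructed during unpickling.
SAFE_BUILTINS = {"range", "complex", "set", "frozenset", "slice"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Reject anything not explicitly allow-listed, which is how gadgets
        # like os.system or subprocess.Popen sneak into malicious payloads.
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads() with the allow-list applied."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

The catch with an allow-list is that it only works when you know every type a legitimate payload can contain, which is part of what makes a real-time auditing approach interesting.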
This may be the first time I've seen a remote exploit delivered via a 404 page. But in this case, the 404 echoes the requested page back, and the back-end code that injects that string into the 404 page is vulnerable to XML injection. While it doesn't directly allow code execution, this approach can result in data leaks and server-side request forgery.

And finally, there was a sketchy leak that may reveal which mobile devices the Cellebrite toolkit can successfully compromise. The story is that [rogueFed] sneaked into a Teams meeting to listen in and grab screenshots. The real surprise here is that GrapheneOS is more resistant to the Cellebrite toolkit than even the stock firmware on phones like the Pixel 9. This leak should be taken with a sizable grain of salt, but may turn out to be legitimate.

hackaday.com/2025/10/31/this-w…