
Piero Bosio — Personal Social Web Site (Fediverso)

A social forum federated with the rest of the world. Instances don't matter, people do.

David Chisnall (*Now with 50% more sarcasm!*)

@david_chisnall@infosec.exchange
About

Posts: 74 · Topics: 21 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • LLVM used to have mailing lists.

    LLVM used to have mailing lists. This was pretty bad. The main llvm-dev list was a firehose. It got so many messages that I’d just file them and then skim subjects later. There were a bunch of smaller lists, but often discussions would start there and then need wider participation, so people would cc some of the bigger lists. Fine, except that not everyone was on all of the lists, so some messages would be stopped at moderation and the people who were on only the larger lists would see a random fragment of a thread. And maybe their replies wouldn’t be seen by people on other lists.

    The move to Discourse was meant to fix this, but it’s actually managed to make things worse. I really cannot get over how awful Discourse is as a piece of software. There is an LLVM Cambridge social next week. There is a post about it on Discourse. The only reason I know about this is that the author sent me a Signal message. Searching for ‘Cambridge’ gives me a handful of messages from 10+ years ago (imported from the mailing lists), not the recent one. It doesn’t show up in the new messages feed (neither do most things there).

    I basically never see things on Discourse unless someone tags me (and then I get an email) or someone sends me an out-of-band link to a thread. I know far less about what is happening than when I subscribed to the mailing lists. I am far less able to find things in searches.

    I voted in favour of the Discourse switch because the problems with the mailing lists were real, but the solution is much worse. I suspect the right solution for a big project like this is:

    • Make the ACLs for all lists accept posts from anyone who is subscribed to any of them, so cross-posting to related lists works.
    • Provide a read-only IMAP system with all of the messages for all lists in it so anyone can just add it in their mail client and have the full history (or whatever subset they ask their client to download).

  • During production of Finding Nemo, we started using Linux boxes in addition to SGIs. Why?

    @Drwave

    This was a while ago, so forgive my foggy memory (I wrote a chapter of a popular Linux book that described this at the time):

    I believe Linux started with the 2 GiB / 2 GiB split and also a direct map of all physical memory (all physical memory was mapped into the kernel’s VA space). This had a few advantages:

    • You can tell apart user and kernel addresses by looking at the top bit.
    • You can always pin a page and then use the result of the current translation to get a VA you can use for access anywhere in the kernel, not just threads associated with the current process.
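    The top-bit trick in the first bullet above can be sketched like this (a toy illustration, assuming the classic 2 GiB / 2 GiB split; the constant is the textbook value, not taken from any particular kernel source):

```python
# With the kernel occupying the upper 2 GiB of a 32-bit address space,
# a single bit test distinguishes user pointers from kernel pointers.
KERNEL_BASE = 0x8000_0000  # start of the kernel half under a 2G/2G split

def is_kernel_address(addr: int) -> bool:
    """True if addr falls in the kernel half of the address space."""
    return (addr & KERNEL_BASE) != 0
```

    Under the later 3 GiB / 1 GiB split the boundary moves up to 0xC0000000, so a single bit no longer suffices and the check becomes a comparison against the split address.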

    By the early 2000s, machines with more than 2 GiB of physical memory were affordable by places running Linux and so the direct map had to go away. I think this work started with PAE. PAE meant you could have more physical memory than you had virtual, which made this much worse, so 32-bit kernels with PAE couldn’t use a direct map at all and had to allocate mappings to fill things into the kernel’s address space on demand, but gradually this became necessary for everyone (I think PAE was a compile-time option for the kernel for a while).

    User processes grew memory requirements faster than the kernel, so there was a configuration option to choose where this split was. I’m not sure when Linux went to 3 GiB, but by the time you have enough physical memory for this to make sense you’re past the point where a direct map is feasible, so there are no downsides. I think it was possible to arrange the split the other way, which was useful for file servers and similar where you wanted a lot of memory for the buffer cache and very little for userspace.

    Red Hat went further. They had kernel builds with a 4 GiB / 4 GiB split. Userspace got almost the full 32-bit address space and the syscall code switched to an entirely different page table on transition (Meltdown mitigation later required a similar mechanism. Sensible ISAs have different page-table base registers for user and kernel space and make this easy. Sadly, x86 is not one of these). This was slow because every system call that took a pointer required the kernel to look up the address translation in the userspace page tables, map the relevant pages into its address space, and then copy. Even with caching, this was painful. Oh, and the trick of self-mapped page tables can’t be used in this configuration (and was covered by a Microsoft patent at the time).

    64-bit kernels are much nicer for running 32-bit userspace because they can give the userspace process the entire low 4 GiB and map that into the kernel’s address space for use by top-half threads. This leads to a slightly odd situation that it’s possible to find 32-bit userspace programs that will run successfully only on 64-bit kernels.


  • Oh no, our competitors are doing stupid things!

    @webhat This is repeated a lot, but has almost no basis in fact.

    Shareholders can sue if the company fails to disclose known business risks (I expect a lot of these lawsuits once the bubble bursts, because most companies know that they are pissing money down the drain and are lying to their shareholders). For example, when I was at Microsoft, I was told that the lawyers said that releasing Copilot without a mechanism to attribute output to sources in the training data was high risk and would expose the company to significant liability, but Kevin Scott overrode them. I was not in the room when this was discussed, so I don’t know if it’s true. If it is, then that would be grounds for a shareholder lawsuit.

    If the company fails to do something that shareholders believe will increase value, they can vote to replace members of the board, who can then vote to replace the senior leadership. But if shareholders believe the company strategy is wrong, their primary recourse is to sell their shares; they don’t have grounds for a lawsuit (if they did, companies would be sued all the time by crackpots who think not investing in the Time Cube is a missed opportunity).

    The big worry for a lot of companies is that shareholders will sell. Public companies raise capital by slowly diluting their stock. If a company’s share price is growing 10% a year and they issue 1% new stock, that gets absorbed in the growth and lets them realise 1% of their market cap each year. For a billion dollar company, that’s $10,000,000 of, effectively, free money. That pays a lot of salaries or funds a lot of capital investment. But it’s possible only because the stock price is going up. Once it starts going down, issuing more stock makes it drop even faster.


  • *Edit*: here at least, I am clearly not isolated!

    @neil @amin

    Before Christmas a couple of big companies cancelled AI-generated ad campaigns because the negative feedback was causing harm to their brand at the start of their peak selling season. The more people complain about these things (don’t share the ads, just say ‘company X used to be okay but their latest ad campaign is slop and it makes me hate them’ - ad agencies consider people sharing the ads to be positive for raising brand awareness even if people hate them), the more that feedback will go from ad companies to their customers.

    For an example of the parenthetical: about 15 years ago, Tango did an ad campaign that had people rolling fruit down Constitution Hill in Swansea, where it smashed at the bottom. There were a bunch of news articles about how they didn’t bother to clean up and left the mess for residents. There was only one problem: all of the fruit was CGI, there was no mess. The negative press made a load of people watch the ad. The claim that they made a mess was ‘leaked’ from the ad agency to news sources who didn’t do any basic fact checking (I lived just around the corner, it was easy for someone to pop down and see there was no mess). The campaign was considered a big success. So if people share an ad and say ‘I hate this’, it won’t necessarily have the right result. But if they share a single terrible frame, it might.


  • Louder for people at the back:

    @krypt3ia

    See the news today. That’s no longer happening, even Wall Street eventually catches onto the obvious.


  • Louder for people at the back:

    @gotofritz I believe this is a special case of 'You are lying about productivity gains'.


  • Louder for people at the back:

    RE: https://mstdn.social/@rysiek/116226720041425679

    Louder for people at the back:

    If ‘AI’ gives you a 20% productivity increase, in an economic system that rewards growth at the expense of everything else, the rational thing for any company to do is use that productivity increase to expand into new markets. This may involve some redundancies because you need different skills for the new opportunities but they will be matched by increased hiring in the other areas. If you and your competitors both see a 20% increase in productivity and you use it to make people redundant and they use it to ship more products in more areas, then they will grow at your expense. Their products will be better than yours and you will lose market share.

    If you are claiming that you have redundancies because ‘AI’ is increasing productivity, then one of the following is true:

    • Your leadership team does not understand market economics (in which case, investors should worry that the board has not replaced obviously incompetent leadership).
    • You are an unchallengeable monopoly and have already filled all adjacent markets and have literally no possibility of growth (in which case, investors should take note and set their price predictions based on today’s revenue, with no expectation of future growth, which would wipe out over 80% of Meta’s market cap).
    • You are lying about productivity gains (in which case, investors should worry about what else you’re lying about and should start prodding the SEC to investigate).

  • I have a load of kitchen utensils made of silicone.

    I have a load of kitchen utensils made of silicone. They’re great: heatproof so you can leave them in the pan, poor thermal conductors so doing so doesn’t burn your hands, and soft so they don’t damage non-stick things.

    But I remain in awe of whichever materials scientist looked at stone and said ‘this is great, but it would be better if we made it squidgy’ and then did it. Who looks at stone and decides it should be squidgy?


  • Oh no, our competitors are doing stupid things!

    Oh no, our competitors are doing stupid things! We must also do stupid things or be left out!

    — CEOs everywhere.


  • Love getting flagged by an AI service for using AI to plagiarise coursework.

    @LewisWorkshop @SecurityWriter

    Conversely, when I was made to use TurnItIn for a course I taught, I tried feeding it the course notes file as a test. It claimed it was 70% plagiarised. The thing it was plagiarised from? The online copy of the course notes, which was bit-for-bit identical to the one I gave it, yet showed up as only 70% similar.


  • Happy International Women's Day!

    @sundogplanets

    I’m not eligible to reply to the poll, but in my last job I was on the Diversity and Inclusion committee, which was predominantly female. I once sat through a meeting where a (male) senior HR rep spent 15 minutes mansplaining inclusive meeting behaviour and no one was able to cut him off. So I’m quite surprised that the results so far are so low.


  • Something I want to make clear:

    @sarahjamielewis That assumes that blocking an app is the desired thing, rather than age-gating some of the content (for example, consider a video streaming client that has age restrictions. These are currently implemented in a very ad-hoc way).


  • Something I want to make clear:

    @sarahjamielewis

    My reading of the law was that this is bad wording and was not the intent. The rest of it reads as if it means to say that if age verification is required for other legal purposes then you must use this 2-bit signal unless you already have some other information that you know is more accurate.

    It should be easy to fix, it’s a shame they went through a load of revisions without fixing it (I didn’t look at the text of the old drafts, the error may have been introduced in editing).


  • My experience of the #fediverse :

    @_elena

    People who use X are increasingly radicalised until they end up as Nazis.

    People who use Mastodon are increasingly radicalised until they end up as C programmers.

    #Fediverso #fediverse #thefutureisfederated #yunohost

  • So, I have actually read the text of California law CA AB1043 and, honestly, I don't hate it.

    @drahardja The law doesn't specify a particular implementation, it specifies only that:

    • They must exist.
    • There must be some documented API to get the age range.

    In particular, it doesn't specify what that API is, but does specify that it must be coarse-grained (giving no more information than the four age ranges, and not giving the precise age or date of birth).


  • So, I have actually read the text of California law CA AB1043 and, honestly, I don't hate it.

    So, I have actually read the text of California law CA AB1043 and, honestly, I don't hate it. It requires operating systems to let you enter a date when you create a user account and requires a way for software to get a coarse-grained approximation of this that says either 'over 18' or one of three age ranges of under-18s. Importantly, it doesn't require:

    • Remote attestation.
    • Tamper-proof storage of the age.
    • Any validation of the age.

    In short, it's a tool for parents: it allows you to set the age of a child's account so that apps (including web browsers, which can then expose it via JavaScript or whatever) can ask questions about what features they should expose.

    In a UNIX-like system, this is easy to do, with a tiny amount of new userspace machinery:

    • Define four groups for the four age ranges (ideally, standardise their names!).
    • Add a /etc/user_birthdays file (or whatever name it is) that stores username (or uid) and birthday pairs.
    • Add a daily cron job that checks the above file and updates group membership.
    • Modify user-add scripts / GUIs to create an entry in the above file.
    • Add a tool to create an entry in the above file for existing user accounts.

    This doesn't require any kernel changes. Any process can query the set of groups that the user is in already.
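    The core of the daily job described above could be sketched like this (a minimal illustration; the file format, group names, and age boundaries are my assumptions, not anything the law or any distribution specifies):

```python
"""Toy daily job: read a birthdays file and compute each user's
age-range group. The file format ('user:YYYY-MM-DD' lines), the group
names, and the range boundaries are all assumptions for illustration."""
import datetime

# Assumed names for the four coarse age ranges.
BOUNDARIES = [(13, "age_under13"), (16, "age_13to15"), (18, "age_16to17")]
ADULT_GROUP = "age_over18"

def age_group(birthday: datetime.date, today: datetime.date) -> str:
    """Map a birthday to one of the four coarse groups."""
    age = today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day))
    for limit, group in BOUNDARIES:
        if age < limit:
            return group
    return ADULT_GROUP

def parse_birthdays(text: str):
    """Yield (user, birthday) pairs from 'user:YYYY-MM-DD' lines."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            user, _, date = line.partition(":")
            yield user, datetime.date.fromisoformat(date)

# A real cron job would then reconcile group membership, e.g. by
# invoking usermod for each user whose group no longer matches.
```

    Because the result is ordinary group membership, the query side needs nothing new at all: any process can already read the groups of the current user.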

    If a parent wants to give their child root, they can update the file and bypass the check. And that's fine, that's a parent's choice. And that's what I want.

    I like this approach far more than things that require users to provide scans of passports and other toxically personal information to be able to use services. If we had this feature, then the Online Safety Act could simply require that web browsers provide a JavaScript API to query the age bracket and that age-restricted sites refuse to work unless it returns 'over 18'.


  • Move over "who's on first"

    @futurebird

    Every time my yoga teacher talks about softening the gaze, or the gaze helping with something, this is what I hear.


  • I like passkeys*

    @dlakelan @whitequark

    The downside of a pure software implementation is that a kernel vulnerability can still exfiltrate all of your keys. Hardware-based implementations (including TPMs) are robust against this because the kernel can ask the device to do the signing, but that only allows online attacks: while someone has compromised your machine, they can log in as you, but they can't exfiltrate your credentials.

    Apple's design is nice because the keys are resident only in the secure element. They do have a flow for copying them to another secure element (encrypted with a key that's negotiated during the exchange), but that locks you into their ecosystem.

    The problem that they're trying to solve is a very hard one: How do you make it easy to copy your keys to another device but make it hard for a malicious person to force or trick you into copying the keys to their device? This is relatively easy within a closed ecosystem, but it's much harder to do in an open model.


  • i built an entire x86 CPU emulator in CSS (no javascript)

    @rebane2001

    Is this practical?

    Not really, you can get way better performance by writing code in CSS directly rather than emulating an entire archaic CPU architecture.

    Beautiful.


  • I have a new technique for reliably vibecoding apps:

    I have a new technique for reliably vibecoding apps:

    First, you write your requirements in an unambiguous specification language. This is the prompt, but to disambiguate it from less precise prompts, we will call it the source of truth encoding, or source code for short. You then feed it to an agent that will create a set of outputs by applying some heuristic-driven transforms that are likely (but not guaranteed) to improve performance. This agent compiles a load of information about how to transform the code into a single pipeline, so we’ll call it a ‘compiler’. This then feeds to the next agent that finds missing parts of the program and tries to fill them in with existing implementations. This is more efficient than simply generating new code and more reliable since the existing implementations are better tested. This agent has a knowledge base of existing code organised in groupings that I’ll refer to as ‘libraries’. It creates links in that web of knowledge between the outputs of the first agent and these existing ‘libraries’ and so we’ll call it a ‘linker’.

    I think it might catch on. VCs: I think we can build this thing for only a couple of hundred million dollars! And the compute requirements are far lower than for existing agentic workflows, so we can sell it as a service and become profitable far sooner than other AI startups. Sign up now for our A round! We have a working proof of concept that can output the Linux kernel, LibreOffice, and many other large codebases from existing prompts!

  • 1 / 1
  • Login

  • Login or register to search.
  • First post
    Last post