
Piero Bosio – Personal Social Web Site

A social forum federated with the rest of the world. It's not the instances that count, it's the people.

It's that time of year again!

Uncategorized

The latest eight messages received from the Federation
  • @jonny "my French lover, le Chat GPT". Jesus Christ. She's coming in hot with the red flags, and the book I want is how to never meet her.

    read more

  • Rome: the army's "sports village" in Quarticciolo. "It's military propaganda disguised as a social initiative"
    @anarchia
    On Tuesday 21 October, a sports village organized by the Italian Army will be set up at the Ascensione NSGC parish in Quarticciolo, a neighbourhood on the eastern outskirts of Rome. The initiative

    read more

  • the ghoulish hollowness of soul to even formulate the idea "i want to be the kind of person who has written a book about how to date me, but i don't want to write it, but i do want to sell it for $4.99"

    read more

  • RE: https://neuromatch.social/@jonny/115409736697961808

    where is Hunter S. Thompson when you need a "The chatGPT girlboss dinner is decadent and depraved"

    read more

  • The Lambda Papers: When LISP Got Turned Into a Microprocessor

    The physical layout of the SCHEME-78 LISP-based microprocessor by Steele and Sussman. (Source: ACM, Vol 23, Issue 11, 1980)
    During the AI research boom of the 1970s, the LISP language – from LISt Processor – saw a major surge in use and development, with many dialects emerging. One of these dialects was Scheme, developed by [Guy L. Steele] and [Gerald Jay Sussman], who wrote a number of articles that were published by the Massachusetts Institute of Technology (MIT) AI Lab as part of the AI Memos. This subset, called the Lambda Papers, covers the two men's ideas about lambda calculus and its application to LISP, culminating in the 1980 paper on the design of a LISP-based microprocessor.

    Scheme is notable here because it influenced the development of what would be standardized in 1994 as Common Lisp, which can reasonably be called 'modern Lisp'. The idea of creating dedicated LISP machines, driven by the processing requirements of AI systems, was not a new one. The mismatch between LISP's S-expressions and the way assembly code typically used the CPUs of the era led to the development of CPUs with dedicated hardware support for LISP.

    The design [Steele] and [Sussman] describe in their 1980 paper, featured in the Communications of the ACM, uses an instruction set architecture (ISA) that matches the LISP language more closely. As described, it is effectively a hardware-based LISP interpreter implemented in a VLSI chip, called the SCHEME-78. Moving as much as possible into hardware greatly improves performance. This is somewhat like how today's AI boom is built around dedicated vector processors that excel at inference, unlike generic CPUs.
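
    To give a feel for what "a hardware-based LISP interpreter" does, here is a minimal software sketch of the eval/apply loop that such a machine executes directly on S-expressions. This is illustrative Python only; the names and supported forms are assumptions for this note, not taken from the SCHEME-78 design.

```python
# Minimal sketch of the eval/apply loop a LISP machine implements in hardware.
# Illustrative only; names and supported forms are not from the SCHEME-78 paper.

def evaluate(expr, env):
    """Evaluate an S-expression (nested Python tuples/atoms) in an environment."""
    if isinstance(expr, str):          # symbol: look it up
        return env[expr]
    if not isinstance(expr, tuple):    # number or other literal
        return expr
    head, *args = expr
    if head == "quote":                # (quote x) -> x
        return args[0]
    if head == "if":                   # (if test then else)
        test, then_branch, else_branch = args
        return evaluate(then_branch if evaluate(test, env) else else_branch, env)
    if head == "lambda":               # (lambda (params) body) -> closure
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    # application: evaluate operator and operands, then apply
    fn = evaluate(head, env)
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, "x": 40}
print(evaluate(("+", "x", 2), env))                           # 42
print(evaluate((("lambda", ("n",), ("+", "n", 1)), 5), env))  # 6
```

    The SCHEME-78 idea is to dispatch on expression shape like this directly in silicon rather than in a software loop.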

    During the 1980s, LISP machines integrated more and more hardware features, with the Symbolics and LMI systems featuring prominently. Later these systems were also marketed for non-AI uses such as 3D modelling and computer graphics. But as funding for AI research dried up and commodity hardware began to outpace specialized processors, these systems vanished.

    Top image: Symbolics 3620 and LMI Lambda Lisp machines (Credit: Jason Riedy)

    hackaday.com/2025/10/20/the-la…

    read more

  • @reiver provocative!

    read more

  • As a biologist, I have determined that shorebirds can be divided into 6 categories:

    - Short bill long legs fren
    - Fancy billed big bois
    - Short billed round bois
    - Short kings w big bills
    - Long legged supermodels
    - Silly-lookin' babies

    read more

  • When I die, I want my kids to get a dog and name him Dad.

    read more
Suggested posts
  • 0 Votes
    3 Posts
    2 Views
    Summary of A Philosophy of Software Design by John Ousterhout
    Source: danlebrero.com

    These are notes by Daniel Lebrero Berna on John Ousterhout's A Philosophy of Software Design. Some advice in the book goes against the current software dogma. The current dogma is the result of previous pains, but has now been taken to the extreme, causing new pains. What the author solves with "Comment-First Development," others solve with Test-Driven Development. The excuses for not writing comments mirror those for not writing tests.

    Key Insights
    - It's easier to see design problems in someone else's code than in your own.
    - Total complexity = Σ(complexity of part × time spent on that part).
    - Goal of good design: make the system obvious.
    - Complexity accumulates incrementally, making it hard to remove. Adopt a "zero tolerance" philosophy.
    - Better modules: interface much simpler than implementation (deep modules).
    - Design modules around required knowledge, not task order.
    - Adjacent layers with similar abstractions are a red flag.
    - Prioritize simple interfaces over simple implementations.
    - Each method should do one thing and do it completely.
    - Long methods are fine if the signature is simple and the code is easy to read.
    - Difficulty naming a method may indicate unclear design.
    - Comments should add precision or intuition.
    - If you aren't improving the design when changing code, you're probably making it worse.
    - Comments belong in the code, not in commit logs.
    - Poor designers spend most of their time chasing bugs in brittle code.

    Preface
    - The most fundamental problem in computer science is problem decomposition.
    - The book is an opinion piece. The goal: reduce complexity.

    1. Introduction (It's All About Complexity)
    - Fight complexity by simplifying and encapsulating it in modules.
    - Software design is never finished.
    - Design flaws are easier to see in others' code.

    2. The Nature of Complexity
    - Complexity = what makes code hard to understand or modify.
    - Total complexity depends on the time spent in each part.
    - Complexity is more obvious to readers than to writers.
    - Symptoms: change amplification, cognitive load, unknown unknowns.
    - Causes: dependencies, obscurity.
    - Complexity accumulates incrementally; remove it aggressively.

    3. Working Code Isn't Enough
    - Distinguish tactical (short-term) from strategic (long-term) programming.
    - The "tactical tornado" writes lots of code fast but increases complexity.

    4. Modules Should Be Deep
    - A module = interface + implementation.
    - Deep modules have simple interfaces and complex implementations. (A small illustrative sketch of a deep module follows this summary.)
    - Interface = what clients must know (formal + informal).
    - Avoid "classitis": too many small classes increase system complexity.
    - Interfaces should make the common case simple.

    5. Information Hiding (and Leakage)
    - Information hiding is key to deep modules.
    - Avoid temporal decomposition (ordering-based design).
    - Larger classes can improve information hiding.

    6. General-Purpose Modules Are Deeper
    - Make modules somewhat general-purpose: the implementation fits current needs, while the interface supports future reuse.
    - Questions to balance generality: What is the simplest interface covering current needs? How many times will it be used? Is the API simple for the current use? If not, it is too general.

    7. Different Layer, Different Abstraction
    - Adjacent layers with similar abstractions are a red flag.
    - Pass-through methods and variables add no value.
    - Fix pass-throughs by grouping related data or using shared/context objects.

    8. Pull Complexity Downwards
    - Prefer simple interfaces over simple implementations.
    - Push complexity into lower layers.
    - Avoid configuration parameters; compute reasonable defaults automatically.

    9. Better Together or Better Apart?
    - Combine elements when they share information, are used together, overlap conceptually, or when combining them simplifies interfaces or eliminates duplication.
    - Developers often split methods too much. Methods can be long if they are cohesive and clear.
    - Red flag: one component requires understanding another's implementation.

    10. Define Errors Out of Existence
    - Exception handling increases complexity.
    - Reduce exception points by designing APIs that eliminate exceptional cases, handling exceptions at low levels, aggregating exceptions into a common type, and crashing when appropriate. (An illustrative sketch of this idea also follows this summary.)

    11. Design It Twice
    - Explore at least two radically different designs before choosing one.

    12. Why Write Comments? The Four Excuses
    - Writing comments improves design and can be enjoyable.
    - The excuses: "Good code is self-documenting." False. "No time to write comments." It's an investment. "Comments get outdated." Update them. "Comments are worthless." Learn to write better ones.

    13. Comments Should Describe Things That Aren't Obvious
    - Comments should add precision and intuition.
    - Document both the interface and the implementation.

    14. Choosing Names
    - Names should be precise and consistent.
    - If naming is hard, the design likely isn't clean.

    15. Write the Comment First
    - Like TDD, comment-first helps with design, pacing, and clarity.

    16. Modifying Existing Code
    - Always improve the design when changing code.
    - Comments belong in code, not in commit logs.

    17. Consistency
    - Don't "improve" existing conventions without a strong reason.

    19. Software Trends
    - Agile and TDD often promote tactical programming.

    20. Designing for Performance
    - Simpler code tends to be faster.
    - Design around the critical path.

    21. Conclusion
    - Poor designers spend their time debugging brittle systems.
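
    To make the "deep module" idea from chapter 4 concrete, here is a minimal Python sketch. The ProfileStore class, its URL scheme, and its internals are invented for this note rather than taken from the book; the point is the shape, one small interface method hiding caching, retries, and parsing.

```python
# Hypothetical sketch of a "deep" module: one small interface method hides
# caching, retries, and parsing that callers never need to know about.
import json
import time
import urllib.request


class ProfileStore:
    """Deep module: clients only see get(); the complexity lives below it."""

    def __init__(self, base_url, retries=3):
        self._base_url = base_url
        self._retries = retries
        self._cache = {}

    def get(self, user_id):
        """Return a user's profile dict, fetching and caching it on demand."""
        if user_id not in self._cache:
            self._cache[user_id] = self._fetch(user_id)
        return self._cache[user_id]

    # --- implementation details hidden behind the interface ---

    def _fetch(self, user_id):
        last_error = None
        for attempt in range(self._retries):
            try:
                with urllib.request.urlopen(f"{self._base_url}/users/{user_id}") as resp:
                    return json.loads(resp.read())
            except OSError as err:
                last_error = err
                time.sleep(2 ** attempt)   # simple exponential backoff
        raise last_error


# Usage: the caller's view of the module stays one line deep.
# store = ProfileStore("https://example.invalid/api")
# print(store.get(42)["name"])
```

    Callers see get() and nothing else, so the complexity stays below the interface.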
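
    Similarly, chapter 10's "define errors out of existence" can be illustrated with a hypothetical pair of functions, loosely modelled on the book's substring discussion; the names and exact behaviour here are this note's assumptions, not quotes from the book.

```python
# Hypothetical sketch of "define errors out of existence": instead of treating
# an out-of-range request as an error, define the operation so it is always valid.

def substring_strict(text, start, end):
    """Exception-style API: callers must guard against out-of-range indices."""
    if start < 0 or end > len(text) or start > end:
        raise ValueError("substring out of range")
    return text[start:end]


def substring_lenient(text, start, end):
    """Error defined out of existence: return whatever overlaps the range."""
    start = max(0, start)
    end = min(len(text), end)
    return text[start:end] if start <= end else ""


# The lenient definition removes the exceptional case and the caller's guard code.
print(substring_lenient("federation", 3, 999))   # "eration"
print(substring_lenient("federation", -5, 3))    # "fed"
```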
  • 0 Votes
    1 Posts
    3 Views
    #freebsd not just an alternative to Linux, but the choice for servers and stability #server #Unix #opensource @diggita @opensource https://webappsmagazine.blogspot.com/2025/10/freebsd-non-solo-unalternativa-linux-ma.html
  • 0 Votes
    1 Posts
    3 Views
    Sorry, I don't do rust, but I still thought this was hilarious! #rustlang #Tylenol #Programming
  • 0 Votes
    1 Posts
    7 Views
    Hey all! I'm Raius, a freelance illustrator and indie game dev. I love drawing, writing music, storytelling, and programming, among others. My personal loves are fantasy, slice-of-life, life sims, narrative games, and low-budget, handmade-with-love projects of all kinds. Here's my site: https://raiusgames.github.io/ #introduction #introductions #art #mastoart #illustration #gamedev #indiedev #programming #music #writing #visualnovel #vn #game #gaming