
Piero Bosio Personal Social Web Site

A social forum federated with the rest of the world. Instances don't matter; people do.

Does anyone know how to get egregious typos on w3schools fixed?

Uncategorized

The latest eight messages received from the Federation
Suggested posts
  • 0 Votes
    1 Posts
    3 Views
    When I started building Fedify, an ActivityPub server framework, I ran into a problem that surprised me: I couldn't figure out how to add logging. Not because logging is hard—there are dozens of mature logging libraries for JavaScript. The problem was that they're primarily designed for applications, not for libraries that want to stay unobtrusive.

I wrote about this a few months ago, and the response was modest—some interest, some skepticism, and quite a bit of debate about whether the post was AI-generated. I'll be honest: English isn't my first language, so I use LLMs to polish my writing. But the ideas and technical content are mine. Several readers wanted to see a real-world example rather than theory.

The problem: existing loggers assume you're building an app

Fedify helps developers build federated social applications using the ActivityPub protocol. If you've ever worked with federation, you know debugging can be painful. When an activity fails to deliver, you need to answer questions like:

- Did the HTTP request actually go out?
- Was the signature generated correctly?
- Did the remote server reject it? Why?
- Was there a problem parsing the response?

These questions span multiple subsystems: HTTP handling, cryptographic signatures, JSON-LD processing, queue management, and more. Without good logging, debugging turns into guesswork.

But here's the dilemma I faced as a library author: if I add verbose logging to help with debugging, I risk annoying users who don't want their console cluttered with Fedify's internal chatter. If I stay silent, users struggle to diagnose issues.

I looked at the existing options. With winston or Pino, I would have to either:

- Configure a logger inside Fedify (imposing my choices on users), or
- Ask users to pass a logger instance to Fedify (adding boilerplate)

There's also debug, which is designed for this use case. But it doesn't give you structured, level-based logs that ops teams expect—and it relies on environment variables, which some runtimes like Deno restrict by default for security reasons.

None of these felt right. So I built LogTape—a logging library designed from the ground up for library authors. And Fedify became its first real user.

The solution: hierarchical categories with zero default output

The key insight was simple: a library should be able to log without producing any output unless the application developer explicitly enables it. Fedify uses LogTape's hierarchical category system to give users fine-grained control over what they see. Here's how the categories are organized:

Category                               What it logs
["fedify"]                             Everything from the library
["fedify", "federation", "inbox"]      Incoming activities
["fedify", "federation", "outbox"]     Outgoing activities
["fedify", "federation", "http"]       HTTP requests and responses
["fedify", "sig", "http"]              HTTP Signature operations
["fedify", "sig", "ld"]                Linked Data Signature operations
["fedify", "sig", "key"]               Key generation and retrieval
["fedify", "runtime", "docloader"]     JSON-LD document loading
["fedify", "webfinger", "lookup"]      WebFinger resource lookups

…and about a dozen more. Each category corresponds to a distinct subsystem.
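On the library side, opting into a category is just a matter of asking LogTape for a logger scoped to it. The sketch below is not Fedify's actual source, only an illustration of the pattern: the handleInbox function and the message text are made up, while getLogger() and the curly-brace message templates are ordinary LogTape usage.

```typescript
import { getLogger } from "@logtape/logtape";

// Scoped to the ["fedify", "federation", "inbox"] category: these calls
// produce no output unless the application enables a sink for this
// category or a parent category such as ["fedify"].
const logger = getLogger(["fedify", "federation", "inbox"]);

// Hypothetical inbox handler, for illustration only.
export async function handleInbox(activityId: string): Promise<void> {
  // Structured properties instead of string interpolation.
  logger.debug("Received activity {activityId}.", { activityId });
  // ... actual inbox processing would happen here ...
}
```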
This means a user can configure logging like this:

```typescript
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    // Show errors from all of Fedify
    { category: "fedify", sinks: ["console"], lowestLevel: "error" },
    // But show debug info for inbox processing specifically
    {
      category: ["fedify", "federation", "inbox"],
      sinks: ["console"],
      lowestLevel: "debug",
    },
  ],
});
```

When something goes wrong with incoming activities, they get detailed logs for that subsystem while keeping everything else quiet. No code changes required—just configuration.

Request tracing with implicit contexts

The hierarchical categories solved the filtering problem, but there was another challenge: correlating logs across async boundaries. In a federated system, a single user action might trigger a cascade of operations: fetch a remote actor, verify their signature, process the activity, fan out to followers, and so on. When something fails, you need to correlate all the log entries for that specific request.

Fedify uses LogTape's implicit context feature to automatically tag every log entry with a requestId:

```typescript
await configure({
  sinks: {
    file: getFileSink("fedify.jsonl", { formatter: jsonLinesFormatter }),
  },
  loggers: [
    { category: "fedify", sinks: ["file"], lowestLevel: "info" },
  ],
  contextLocalStorage: new AsyncLocalStorage(), // Enables implicit contexts
});
```

With this configuration, every log entry automatically includes a requestId property. When you need to debug a specific request, you can filter your logs:

```bash
jq 'select(.properties.requestId == "abc-123")' fedify.jsonl
```

And you'll see every log entry from that request—across all subsystems, all in order. No manual correlation needed. The requestId is derived from standard headers when available (X-Request-Id, Traceparent, etc.), so it integrates naturally with existing observability infrastructure.

What users actually see

So what does all this configuration actually mean for someone using Fedify? If a Fedify user doesn't configure LogTape at all, they see nothing. No warnings about missing configuration, no default output, and minimal performance overhead—the logging calls are essentially no-ops.

For basic visibility, they can enable error-level logging for all of Fedify with three lines of configuration. When debugging a specific issue, they can enable debug-level logging for just the relevant subsystem. And if they're running in production with serious observability requirements, they can pipe structured JSON logs to their monitoring system with request correlation built in.

The same library code supports all these scenarios—whether the user is running on Node.js, Deno, Bun, or edge functions, without extra polyfills or shims. The user decides what they need.

Lessons learned

Building Fedify with LogTape taught me a few things:

- Design your categories early. The hierarchical structure should reflect how users will actually want to filter logs. I organized Fedify's categories around subsystems that users might need to debug independently.
- Use structured logging. Properties like requestId, activityId, and actorId are far more useful than string interpolation when you need to analyze logs programmatically.
- Implicit contexts turned out to be more useful than I expected. Being able to correlate logs across async boundaries without passing context manually made debugging distributed operations much easier. When a user reports that activity delivery failed, I can give them a single jq command to extract everything relevant.
- Trust your users. Some library authors worry about exposing too much internal detail through logs. I've found the opposite—users appreciate being able to see what's happening when they need to. The key is making it opt-in.

Try it yourself

If you're building a library and struggling with the logging question—how much to log, how to give users control, how to avoid being noisy—I'd encourage you to look at how Fedify does it. The Fedify logging documentation explains everything in detail. And if you want to understand the philosophy behind LogTape's design, my earlier post covers that.

LogTape isn't trying to replace winston or Pino for application developers who are happy with those tools. It fills a different gap: logging for libraries that want to stay out of the way until users need them. If that's what you're looking for, it might be a better fit than the usual app-centric loggers.
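As a footnote to the request-tracing section above, here is a minimal sketch of how a requestId could be attached via LogTape's withContext() helper. The handleRequest wrapper, the header fallback to a random UUID, and the routing comment are illustrative assumptions, not Fedify's actual implementation.

```typescript
import { withContext } from "@logtape/logtape";
import { randomUUID } from "node:crypto";

// Hypothetical HTTP entry point: every log entry emitted inside the
// callback, from any subsystem, is implicitly tagged with this requestId.
async function handleRequest(request: Request): Promise<Response> {
  const requestId = request.headers.get("X-Request-Id") ?? randomUUID();
  return withContext({ requestId }, async () => {
    // ... dispatch into inbox/outbox processing here ...
    return new Response("OK");
  });
}
```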
  • 1 Votes
    17 Posts
    69 Views
    Sure, that works too. Photon was just a play on the original name and on the name of the particle of light, which is kinda important for photos: https://en.wikipedia.org/wiki/Photon
  • 0 Votes
    1 Posts
    8 Views
    Things to look at on company time: old operating systems emulated in #JavaScript https://www.pcjs.org/
  • 0 Votes
    1 Posts
    11 Views
    We're pleased to announce the release of Optique 0.5.0, which brings significant improvements to error handling, help text generation, and overall developer experience. This release maintains full backward compatibility, so you can upgrade without modifying existing code.

Better code organization through module separation

The large @optique/core/parser module has been refactored into three focused modules that better reflect their purposes. Primitive parsers like option() and argument() now live in @optique/core/primitives, modifier functions such as optional() and withDefault() have moved to @optique/core/modifiers, and combinator functions including object() and or() are now in @optique/core/constructs.

```typescript
// Before: everything from one module
import {
  option, flag, argument,           // primitives
  optional, withDefault, multiple,  // modifiers
  object, or, merge,                // constructs
} from "@optique/core/parser";

// After: organized imports (recommended)
import { option, flag, argument } from "@optique/core/primitives";
import { optional, withDefault, multiple } from "@optique/core/modifiers";
import { object, or, merge } from "@optique/core/constructs";
```

While we recommend importing from these specialized modules for better clarity, all functions continue to be re-exported from the original @optique/core/parser module to ensure your existing code works unchanged. This reorganization makes the codebase more maintainable and helps developers understand the relationships between different parser types.

Smarter error handling with automatic conversion

One of the most requested features has been better error handling for default value callbacks in withDefault(). Previously, if your callback threw an error—say, when an environment variable wasn't set—that error would bubble up as a runtime exception. Starting with 0.5.0, these errors are automatically caught and converted to parser-level errors, providing consistent error formatting and proper exit codes.

```typescript
// Before (0.4.x): runtime exception that crashes the app
const parser = object({
  apiUrl: withDefault(option("--url", url()), () => {
    if (!process.env.API_URL) {
      throw new Error("API_URL not set"); // Uncaught exception!
    }
    return new URL(process.env.API_URL);
  }),
});
```

```typescript
// After (0.5.0): graceful parser error
const parser = object({
  apiUrl: withDefault(option("--url", url()), () => {
    if (!process.env.API_URL) {
      throw new Error("API_URL not set"); // Automatically caught and formatted
    }
    return new URL(process.env.API_URL);
  }),
});
```

We've also introduced the WithDefaultError class, which accepts structured messages instead of plain strings. This means you can now throw errors with rich formatting that matches the rest of Optique's error output:

```typescript
import { WithDefaultError, message, envVar } from "@optique/core";

const parser = object({
  // Plain error - automatically converted to text
  databaseUrl: withDefault(option("--db", url()), () => {
    if (!process.env.DATABASE_URL) {
      throw new Error("Database URL not configured");
    }
    return new URL(process.env.DATABASE_URL);
  }),

  // Rich error with structured message
  apiToken: withDefault(option("--token", string()), () => {
    if (!process.env.API_TOKEN) {
      throw new WithDefaultError(
        message`Environment variable ${envVar("API_TOKEN")} is required for authentication`
      );
    }
    return process.env.API_TOKEN;
  }),
});
```

The new envVar message component ensures environment variables are visually distinct in error messages, appearing bold and underlined in colored output or wrapped in backticks in plain text.
More helpful help text with custom default descriptions

Default values in help text can sometimes be misleading, especially when they come from environment variables or are computed at runtime. Optique 0.5.0 allows you to customize how default values appear in help output through an optional third parameter to withDefault().

```typescript
import { withDefault, message, envVar } from "@optique/core";

// Before: shows actual URL value in help
const parser = object({
  apiUrl: withDefault(
    option("--api-url", url()),
    new URL("https://api.example.com")
  ),
  // Help shows: --api-url URL [https://api.example.com]
});
```

```typescript
// After: shows descriptive text
const parser = object({
  apiUrl: withDefault(
    option("--api-url", url()),
    new URL("https://api.example.com"),
    { message: message`Default API endpoint` }
  ),
  // Help shows: --api-url URL [Default API endpoint]
});
```

This is particularly useful for environment variables and computed defaults:

```typescript
const parser = object({
  // Environment variable
  authToken: withDefault(
    option("--token", string()),
    () => process.env.AUTH_TOKEN || "anonymous",
    { message: message`${envVar("AUTH_TOKEN")} or anonymous` }
  ),
  // Help shows: --token STRING [AUTH_TOKEN or anonymous]

  // Computed value
  workers: withDefault(
    option("--workers", integer()),
    () => os.cpus().length,
    { message: message`Number of CPU cores` }
  ),
  // Help shows: --workers INT [Number of CPU cores]

  // Sensitive information
  apiKey: withDefault(
    option("--api-key", string()),
    () => process.env.SECRET_KEY || "",
    { message: message`From secure storage` }
  ),
  // Help shows: --api-key STRING [From secure storage]
});
```

Instead of displaying the actual default value, you can now show descriptive text that better explains where the value comes from. This is particularly useful for sensitive information like API tokens or for computed defaults like the number of CPU cores.

The help system now properly handles ANSI color codes in default value displays, maintaining dim styling even when inner components have their own color formatting. This ensures default values remain visually distinct from the main help text.

Comprehensive error message customization

We've added a systematic way to customize error messages across all parser types and combinators. Every parser now accepts an errors option that lets you provide context-specific feedback instead of generic error messages. This applies to primitive parsers, value parsers, combinators, and even specialized parsers in companion packages.

Primitive parser errors

```typescript
import { option, flag, argument, command } from "@optique/core/primitives";
import { message, optionName, metavar } from "@optique/core/message";

// Option parser with custom errors
const serverPort = option("--port", integer(), {
  errors: {
    missing: message`Server port is required. Use ${optionName("--port")} to specify.`,
    invalidValue: (error) => message`Invalid port number: ${error}`,
    endOfInput: message`${optionName("--port")} requires a ${metavar("PORT")} number.`,
  },
});

// Command parser with custom errors
const deployCommand = command("deploy", deployParser, {
  errors: {
    notMatched: (expected, actual) =>
      message`Unknown command "${actual}". Did you mean "${expected}"?`,
  },
});
```

Value parser errors

Error customization can be static messages for consistent errors or dynamic functions that incorporate the problematic input:

```typescript
import { integer, choice, string } from "@optique/core/valueparser";

// Integer with range validation
const port = integer({
  min: 1024,
  max: 65535,
  errors: {
    invalidInteger: message`Port must be a valid number.`,
    belowMinimum: (value, min) =>
      message`Port ${String(value)} is reserved. Use ${String(min)} or higher.`,
    aboveMaximum: (value, max) =>
      message`Port ${String(value)} exceeds maximum. Use ${String(max)} or lower.`,
  },
});

// Choice with helpful suggestions
const logLevel = choice(["debug", "info", "warn", "error"], {
  errors: {
    invalidChoice: (input, choices) =>
      message`"${input}" is not a valid log level. Choose from: ${values(choices)}.`,
  },
});

// String with pattern validation
const email = string({
  pattern: /^[^@]+@[^@]+\.[^@]+$/,
  errors: {
    patternMismatch: (input) =>
      message`"${input}" is not a valid email address. Use format: user@example.com`,
  },
});
```

Combinator errors

```typescript
import { or, multiple, object } from "@optique/core/constructs";

// Or combinator with custom no-match error
const format = or(
  flag("--json"),
  flag("--yaml"),
  flag("--xml"),
  {
    errors: {
      noMatch: message`Please specify an output format: --json, --yaml, or --xml.`,
      unexpectedInput: (token) => message`Unknown format option "${token}".`,
    },
  }
);

// Multiple parser with count validation
const inputFiles = multiple(argument(string()), {
  min: 1,
  max: 5,
  errors: {
    tooFew: (count, min) =>
      message`At least ${String(min)} file required, but got ${String(count)}.`,
    tooMany: (count, max) =>
      message`Maximum ${String(max)} files allowed, but got ${String(count)}.`,
  },
});
```

Package-specific errors

Both @optique/run and @optique/temporal packages have been updated with error customization support for their specialized parsers:

```typescript
// @optique/run path parser
import { path } from "@optique/run/valueparser";

const configFile = option("--config", path({
  mustExist: true,
  type: "file",
  extensions: [".json", ".yaml"],
  errors: {
    pathNotFound: (input) =>
      message`Configuration file "${input}" not found. Please check the path.`,
    notAFile: (input) =>
      message`"${input}" is a directory. Please specify a file.`,
    invalidExtension: (input, extensions, actual) =>
      message`Invalid config format "${actual}". Use ${values(extensions)}.`,
  },
}));

// @optique/temporal instant and duration parsers
import { instant, duration } from "@optique/temporal";

const timestamp = option("--time", instant({
  errors: {
    invalidFormat: (input) =>
      message`"${input}" is not a valid timestamp. Use ISO 8601 format: 2024-01-01T12:00:00Z`,
  },
}));

const timeout = option("--timeout", duration({
  errors: {
    invalidFormat: (input) =>
      message`"${input}" is not a valid duration. Use ISO 8601 format: PT30S (30 seconds), PT5M (5 minutes)`,
  },
}));
```

Error customization integrates seamlessly with Optique's structured message format, ensuring consistent styling across all error output. The system helps you provide helpful, actionable feedback that guides users toward correct usage rather than leaving them confused by generic error messages.

Looking forward

This release focuses on improving the developer experience without breaking existing code. Every new feature is opt-in, and all changes maintain backward compatibility. We believe these improvements make Optique more pleasant to work with, especially when building user-friendly CLI applications that need clear error messages and helpful documentation.
We're grateful to the community members who suggested these improvements and helped shape this release through discussions and issue reports. Your feedback continues to drive Optique's evolution toward being a more capable and ergonomic CLI parser for TypeScript.

To upgrade to Optique 0.5.0, simply update your dependencies:

```bash
npm update @optique/core @optique/run
# or
deno update
```

For detailed migration guidance and API documentation, please refer to the official documentation. While no code changes are required, we encourage you to explore the new error customization options and help text improvements to enhance your CLI applications.