
Logging in Node.js (or Deno or Bun or edge functions) in 2026

    It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.

    We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.

    I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.

    Starting with console methods—and where they fall short

    The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.

    console.debug("Connecting to database...");
    console.info("Server started on port 3000");
    console.warn("Cache miss for user 123");
    console.error("Failed to process payment");
    

    For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:

    No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
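
    The usual workaround is a hand-rolled wrapper, repeated in every project. A minimal sketch (the DEBUG flag and debugLog helper are hypothetical, not part of any library):

```typescript
// Hypothetical wrapper: the kind of code you end up writing to get
// runtime filtering out of plain console.debug.
const DEBUG = process.env.NODE_ENV !== "production";

function debugLog(message: string): void {
  if (DEBUG) console.debug(message);
}

debugLog("Connecting to database...");
```

    It works, but now every module needs to import the wrapper, and you still only have one global on/off switch.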

    Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.

    No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").

    No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.

    Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.

    What you actually need from a logging system

    Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.

    Log levels with filtering

    A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.

    Categories

    When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.

    Sinks (multiple output destinations)

    “Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.

    Structured logging

    Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:

    // Instead of this:
    logger.info("User 123 logged in from 192.168.1.1");
    
    // You do this:
    logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
    

    Now you can search for all logs where userId === 123 or filter by IP address.
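
    Concretely, once logs are structured you can filter them with a plain predicate instead of a regex. A sketch over in-memory records, assuming the JSON Lines shape this post covers later (findByUserId and the sample records are hypothetical):

```typescript
// Sketch: querying structured logs in memory. `sample` stands in for the
// contents of a JSON Lines log file; the records are hypothetical.
function findByUserId(input: string, userId: number): unknown[] {
  return input
    .split("\n")
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line))
    .filter((r) => r.properties?.userId === userId);
}

const sample = [
  '{"level":"info","message":"User logged in","properties":{"userId":123}}',
  '{"level":"info","message":"User logged in","properties":{"userId":456}}',
].join("\n");

console.log(findByUserId(sample, 123).length); // → 1
```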

    Context for request tracing

    In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.

    Getting started with LogTape

    There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.

    So why LogTape? A few reasons stood out to me:

    Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.

    Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”

    Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.

    Let's set it up:

    npm add @logtape/logtape       # npm
    pnpm add @logtape/logtape      # pnpm
    yarn add @logtape/logtape      # Yarn
    deno add jsr:@logtape/logtape  # Deno
    bun add @logtape/logtape       # Bun
    

    Configuration happens once, at your application's entry point:

    import { configure, getConsoleSink, getLogger } from "@logtape/logtape";
    
    await configure({
      sinks: {
        console: getConsoleSink(),  // Where logs go
      },
      loggers: [
        { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },  // What to log
      ],
    });
    
    // Now you can log from anywhere in your app:
    const logger = getLogger(["my-app", "server"]);
    logger.info`Server started on port 3000`;
    logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
    

    Notice a few things:

    1. Configuration is explicit. You decide where logs go (sinks) and which logs to show (lowestLevel).
    2. Categories are hierarchical. The logger ["my-app", "server"] inherits settings from ["my-app"].
    3. Template literals work. You can use backticks for a natural logging syntax.

    Categories and filtering: Controlling log verbosity

    Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.

    Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.

    await configure({
      sinks: {
        console: getConsoleSink(),
      },
      loggers: [
        { category: ["my-app"], lowestLevel: "info", sinks: ["console"] },  // Default: info and above
        { category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] },  // DB module: show debug too
      ],
    });
    

    Now when you log from different parts of your app:

    // In your database module:
    const dbLogger = getLogger(["my-app", "database"]);
    dbLogger.debug`Executing query: ${sql}`;  // This shows up
    
    // In your HTTP module:
    const httpLogger = getLogger(["my-app", "http"]);
    httpLogger.debug`Received request`;  // This is filtered out (below "info")
    httpLogger.info`GET /api/users 200`;  // This shows up
    

    Controlling third-party library logs

    If you're using libraries that also use LogTape, you can control their logs separately:

    await configure({
      sinks: { console: getConsoleSink() },
      loggers: [
        { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
        // Only show warnings and above from some-library
        { category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
      ],
    });
    

    The root logger

    Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:

    await configure({
      sinks: { console: getConsoleSink() },
      loggers: [
        // Catch all logs at info level
        { category: [], lowestLevel: "info", sinks: ["console"] },
        // But show debug for your app
        { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
      ],
    });
    

    Log levels and when to use them

    LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.

    trace: Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug.
    debug: Information useful during development. Variable values, state changes, flow control decisions.
    info: Normal operational messages. “Server started,” “User logged in,” “Job completed.”
    warning: Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config.
    error: Something failed. An operation couldn't complete, but the app is still running.
    fatal: The app is about to crash or is in an unrecoverable state.

    const logger = getLogger(["my-app"]);
    
    logger.trace`Entering processUser function`;
    logger.debug`Processing user ${{ userId: 123 }}`;
    logger.info`User successfully created`;
    logger.warn`Rate limit approaching: ${980}/1000 requests`;
    logger.error`Failed to save user: ${error.message}`;
    logger.fatal`Database connection lost, shutting down`;
    

    A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
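
    One way to make that switch without touching code is to read the level from the environment. A sketch (the LOG_LEVEL variable name is an assumption, and it must hold a valid LogTape level name):

```typescript
import { configure, getConsoleSink, type LogLevel } from "@logtape/logtape";

// Sketch: drive verbosity from the environment so a deployment can change
// it without a code change. Assumes LOG_LEVEL holds a valid level name.
const level = (process.env.LOG_LEVEL ?? "info") as LogLevel;

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [{ category: ["my-app"], lowestLevel: level, sinks: ["console"] }],
});
```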

    Structured logging: Beyond plain text

    At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”

    If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.

    Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.

    LogTape supports two syntaxes for this:

    Template literals (great for simple messages)

    const userId = 123;
    const action = "login";
    logger.info`User ${userId} performed ${action}`;
    

    Message templates with properties (great for structured data)

    logger.info("User performed action", {
      userId: 123,
      action: "login",
      ip: "192.168.1.1",
      timestamp: new Date().toISOString(),
    });
    

    You can reference properties in your message using placeholders:

    logger.info("User {userId} logged in from {ip}", {
      userId: 123,
      ip: "192.168.1.1",
    });
    // Output: User 123 logged in from 192.168.1.1
    

    Nested property access

    LogTape supports dot notation and array indexing in placeholders:

    logger.info("Order {order.id} placed by {order.customer.name}", {
      order: {
        id: "ORD-001",
        customer: { name: "Alice", email: "alice@example.com" },
      },
    });
    
    logger.info("First item: {items[0].name}", {
      items: [{ name: "Widget", price: 9.99 }],
    });
    

    Machine-readable output with JSON Lines

    For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:

    import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";
    
    await configure({
      sinks: {
        console: getConsoleSink({ formatter: jsonLinesFormatter }),
      },
      loggers: [
        { category: [], lowestLevel: "info", sinks: ["console"] },
      ],
    });
    

    Output:

    {"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
    

    Sending logs to different destinations (sinks)

    So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.

    Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.

    This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.

    Console sink

    The simplest sink—outputs to the console:

    import { getConsoleSink } from "@logtape/logtape";
    
    const consoleSink = getConsoleSink();
    

    File sink

    For writing logs to files, install the @logtape/file package:

    npm add @logtape/file
    
    import { getFileSink, getRotatingFileSink } from "@logtape/file";
    
    // Simple file sink
    const fileSink = getFileSink("app.log");
    
    // Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
    const rotatingFileSink = getRotatingFileSink("app.log", {
      maxSize: 10 * 1024 * 1024,  // 10MB
      maxFiles: 5,
    });
    

    Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.

    External services

    For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:

    // OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
    import { getOpenTelemetrySink } from "@logtape/otel";
    
    // Sentry (for error tracking with stack traces and context)
    import { getSentrySink } from "@logtape/sentry";
    
    // AWS CloudWatch Logs (for AWS-native log aggregation)
    import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
    

    The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.

    Multiple sinks

    Here's where things get interesting. You can send different logs to different destinations based on their level or category:

    await configure({
      sinks: {
        console: getConsoleSink(),
        file: getFileSink("app.log"),
        errors: getSentrySink(),
      },
      loggers: [
        { category: [], lowestLevel: "info", sinks: ["console", "file"] },  // Everything to console + file
        { category: [], lowestLevel: "error", sinks: ["errors"] },  // Errors also go to Sentry
      ],
    });
    

    Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.

    Custom sinks

    Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.

    A sink is just a function that takes a LogRecord. That's it:

    import type { Sink } from "@logtape/logtape";
    
    const slackSink: Sink = (record) => {
      // Only send errors and fatals to Slack
      if (record.level === "error" || record.level === "fatal") {
        // Sinks are synchronous, so this is fire-and-forget; swallow network
        // errors so a logging failure can never crash the app.
        fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
          }),
        }).catch(() => {});
      }
    };
    

    The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.

    Request tracing with contexts

    Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.

    This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.

    LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.

    Explicit context

    The simplest approach is to create a logger with attached properties using .with():

    function handleRequest(req: Request) {
      const requestId = crypto.randomUUID();
      const logger = getLogger(["my-app", "http"]).with({ requestId });
    
      logger.info`Request received`;  // Includes requestId automatically
      processRequest(req, logger);
      logger.info`Request completed`;  // Also includes requestId
    }
    

    This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?

    Implicit context

    This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).

    First, enable implicit contexts in your configuration:

    import { configure, getConsoleSink } from "@logtape/logtape";
    import { AsyncLocalStorage } from "node:async_hooks";
    
    await configure({
      sinks: { console: getConsoleSink() },
      loggers: [
        { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
      ],
      contextLocalStorage: new AsyncLocalStorage(),
    });
    

    Then use withContext() in your request handler:

    import { withContext, getLogger } from "@logtape/logtape";
    
    function handleRequest(req: Request) {
      const requestId = crypto.randomUUID();
    
      return withContext({ requestId }, async () => {
        // Every log message in this callback includes requestId—automatically
        const logger = getLogger(["my-app"]);
        logger.info`Processing request`;
    
        await validateInput(req);   // Logs here include requestId
        await processBusinessLogic(req);  // Logs here too
        await saveToDatabase(req);  // And here
    
        logger.info`Request complete`;
      });
    }
    

    The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.

    This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.

    Framework integrations

    Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:

    // Express
    import { expressLogger } from "@logtape/express";
    app.use(expressLogger());
    
    // Fastify
    import { getLogTapeFastifyLogger } from "@logtape/fastify";
    const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });
    
    // Hono
    import { honoLogger } from "@logtape/hono";
    app.use(honoLogger());
    
    // Koa
    import { koaLogger } from "@logtape/koa";
    app.use(koaLogger());
    

    These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.

    Using LogTape in libraries vs applications

    If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?

    LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.

    If you're writing a library

    The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.

    // my-library/src/database.ts
    import { getLogger } from "@logtape/logtape";
    
    const logger = getLogger(["my-library", "database"]);
    
    export function connect(url: string) {
      logger.debug`Connecting to ${url}`;
      // ... connection logic ...
      logger.info`Connected successfully`;
    }
    

    What happens when someone uses your library?

    If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.

    If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.

    This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.

    If you're writing an application

    You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:

    await configure({
      sinks: { console: getConsoleSink() },
      loggers: [
        { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },  // Your app: verbose
        { category: ["my-library"], lowestLevel: "warning", sinks: ["console"] },  // Library: quiet
        { category: ["noisy-library"], lowestLevel: "fatal", sinks: [] },  // That one library: silent
      ],
    });
    

    This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.

    Migrating from another logger?

    If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:

    import { install } from "@logtape/adaptor-winston";
    import winston from "winston";
    
    install(winston.createLogger({ /* your existing config */ }));
    

    This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.

    Production considerations

    Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.

    Non-blocking mode

    By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.

    Non-blocking mode buffers log messages and writes them in the background:

    const consoleSink = getConsoleSink({ nonBlocking: true });
    const fileSink = getFileSink("app.log", { nonBlocking: true });
    

    The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
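
    Since buffered records can be lost on exit, it's worth flushing explicitly when the process shuts down. A sketch for Node.js using LogTape's dispose(); the signal handling is illustrative, not part of LogTape:

```typescript
import { dispose } from "@logtape/logtape";

// Flush buffered log records before the process exits.
process.on("SIGTERM", async () => {
  await dispose(); // writes anything still buffered and tears down sinks
  process.exit(0);
});
```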

    Sensitive data redaction

    Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.

    LogTape's @logtape/redaction package helps you catch these before they become a problem:

    import {
      redactByPattern,
      EMAIL_ADDRESS_PATTERN,
      CREDIT_CARD_NUMBER_PATTERN,
      type RedactionPattern,
    } from "@logtape/redaction";
    import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";
    
    const BEARER_TOKEN_PATTERN: RedactionPattern = {
      pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
      replacement: "[REDACTED]",
    };
    
    const formatter = redactByPattern(defaultConsoleFormatter, [
      EMAIL_ADDRESS_PATTERN,
      CREDIT_CARD_NUMBER_PATTERN,
      BEARER_TOKEN_PATTERN,
    ]);
    
    await configure({
      sinks: {
        console: getConsoleSink({ formatter }),
      },
      // ...
    });
    

    With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.

    See the redaction documentation for more patterns and field-based redaction.

    Edge functions and serverless

    Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.

    The solution is to explicitly flush logs before returning:

    import { configure, dispose } from "@logtape/logtape";
    
    export default {
      async fetch(request, env, ctx) {
        await configure({ /* ... */ });
        
        // ... handle request ...
        
        ctx.waitUntil(dispose());  // Flush logs before worker terminates
        
        return new Response("OK");
      },
    };
    

    The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.

    Wrapping up

    Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.

    LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.

    If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.

    Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.


Gli ultimi otto messaggi ricevuti dalla Federazione
Post suggeriti
  • 0 Votes
    1 Posts
    1 Views
    Fedify 1.10.0: Observability foundations for the future debug dashboard Fedify is a #TypeScript framework for building #ActivityPub servers that participate in the #fediverse. It reduces the complexity and boilerplate typically required for ActivityPub implementation while providing comprehensive federation capabilities. We're excited to announce #Fedify 1.10.0, a focused release that lays critical groundwork for future debugging and observability features. Released on December 24, 2025, this version introduces infrastructure improvements that will enable the upcoming debug dashboard while maintaining full backward compatibility with existing Fedify applications. This release represents a transitional step toward Fedify 2.0.0, introducing optional capabilities that will become standard in the next major version. The changes focus on enabling richer observability through OpenTelemetry enhancements and adding prefix scanning capabilities to the key–value store interface. Enhanced OpenTelemetry instrumentation Fedify 1.10.0 significantly expands OpenTelemetry instrumentation with span events that capture detailed ActivityPub data. These enhancements enable richer observability and debugging capabilities without relying solely on span attributes, which are limited to primitive values. 
The new span events provide complete activity payloads and verification status, making it possible to build comprehensive debugging tools that show the full context of federation operations: activitypub.activity.received event on activitypub.inbox span — records the full activity JSON, verification status (activity verified, HTTP signatures verified, Linked Data signatures verified), and actor information activitypub.activity.sent event on activitypub.send_activity span — records the full activity JSON and target inbox URL activitypub.object.fetched event on activitypub.lookup_object span — records the fetched object's type and complete JSON-LD representation Additionally, Fedify now instruments previously uncovered operations: activitypub.fetch_document span for document loader operations, tracking URL fetching, HTTP redirects, and final document URLs activitypub.verify_key_ownership span for cryptographic key ownership verification, recording actor ID, key ID, verification result, and the verification method used These instrumentation improvements emerged from work on issue #234 (Real-time ActivityPub debug dashboard). Rather than introducing a custom observer interface as originally proposed in #323, we leveraged Fedify's existing OpenTelemetry infrastructure to capture rich federation data through span events. This approach provides a standards-based foundation that's composable with existing observability tools like Jaeger, Zipkin, and Grafana Tempo. Distributed trace storage with FedifySpanExporter Building on the enhanced instrumentation, Fedify 1.10.0 introduces FedifySpanExporter, a new OpenTelemetry SpanExporter that persists ActivityPub activity traces to a KvStore. This enables distributed tracing support across multiple nodes in a Fedify deployment, which is essential for building debug dashboards that can show complete request flows across web servers and background workers. 
The new @fedify/fedify/otel module provides the following types and interfaces:

    import { MemoryKvStore } from "@fedify/fedify";
    import { FedifySpanExporter } from "@fedify/fedify/otel";
    import {
      BasicTracerProvider,
      SimpleSpanProcessor,
    } from "@opentelemetry/sdk-trace-base";

    const kv = new MemoryKvStore();
    const exporter = new FedifySpanExporter(kv, {
      ttl: Temporal.Duration.from({ hours: 1 }),
    });

    const provider = new BasicTracerProvider();
    provider.addSpanProcessor(new SimpleSpanProcessor(exporter));

The stored traces can be queried for display in debugging interfaces:

    // Get all activities for a specific trace
    const activities = await exporter.getActivitiesByTraceId(traceId);

    // Get recent traces with summary information
    const recentTraces = await exporter.getRecentTraces({ limit: 100 });

The exporter supports two storage strategies depending on the KvStore capabilities. When the list() method is available (preferred), it stores individual records with keys like [prefix, traceId, spanId]. When only cas() is available, it uses compare-and-swap operations to append records to arrays stored per trace. This infrastructure provides the foundation for implementing a comprehensive debug dashboard as a custom SpanExporter, as outlined in the updated implementation plan for issue #234.

Optional list() method for KvStore interface

Fedify 1.10.0 adds an optional list() method to the KvStore interface for enumerating entries by key prefix. This method enables efficient prefix scanning, which is useful for implementing features like distributed trace storage, cache invalidation by prefix, and listing related entries.

    interface KvStore {
      // ... existing methods
      list?(prefix?: KvKey): AsyncIterable<KvStoreListEntry>;
    }

When the prefix parameter is omitted or empty, list() returns all entries in the store. This is useful for debugging and administrative purposes.
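To make the prefix-matching semantics concrete, here is a minimal, self-contained sketch of a store implementing list(). The PrefixKv class and its Key type are simplified stand-ins invented for illustration; a real implementation must conform to Fedify's actual KvStore, KvKey, and KvStoreListEntry types:

```typescript
// Hypothetical minimal store illustrating prefix scanning for list().
// Keys are arrays of strings (like Fedify's KvKey); a key matches when
// it starts with every element of the prefix, in order.
type Key = string[];

class PrefixKv {
  private entries: { key: Key; value: unknown }[] = [];

  set(key: Key, value: unknown): void {
    this.entries.push({ key, value });
  }

  // An empty or omitted prefix matches every entry in the store.
  async *list(prefix: Key = []): AsyncIterable<{ key: Key; value: unknown }> {
    for (const entry of this.entries) {
      const matches =
        entry.key.length >= prefix.length &&
        prefix.every((part, i) => entry.key[i] === part);
      if (matches) yield entry;
    }
  }
}

const kv = new PrefixKv();
kv.set(["traces", "t1", "s1"], { name: "activitypub.inbox" });
kv.set(["traces", "t1", "s2"], { name: "activitypub.send_activity" });
kv.set(["traces", "t2", "s1"], { name: "activitypub.lookup_object" });

// Collect everything under ["traces", "t1"]:
const hits: Key[] = [];
const done: Promise<void> = (async () => {
  for await (const { key } of kv.list(["traces", "t1"])) hits.push(key);
})();
done.then(() => console.log(hits.length)); // two spans belong to trace t1
```

The same element-by-element comparison is what the official backends translate into their storage engine's idiom (a LIKE pattern, a SCAN glob, an array slice comparison, and so on).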
All official KvStore implementations have been updated to support this method:

- MemoryKvStore — filters in-memory keys by prefix
- SqliteKvStore — uses LIKE query with JSON key pattern
- PostgresKvStore — uses array slice comparison
- RedisKvStore — uses SCAN with pattern matching and key deserialization
- DenoKvStore — delegates to Deno KV's built-in list() API
- WorkersKvStore — uses Cloudflare Workers KV list() with JSON key prefix pattern

While list() is currently optional to give existing custom KvStore implementations time to add support, it will become a required method in Fedify 2.0.0 (tracked in issue #499). This migration path allows implementers to gradually adopt the new capability throughout the 1.x release cycle. The addition of list() support was implemented in pull request #500, which also included the setup of proper testing infrastructure for WorkersKvStore using Vitest with @cloudflare/vitest-pool-workers.

NestJS 11 and Express 5 support

Thanks to a contribution from Cho Hasang (@crohasang@hackers.pub), the @fedify/nestjs package now supports NestJS 11 environments that use Express 5. The peer dependency range for Express has been widened to ^4.0.0 || ^5.0.0, eliminating peer dependency conflicts in modern NestJS projects while maintaining backward compatibility with Express 4. This change, implemented in pull request #493, keeps the workspace catalog pinned to Express 4 for internal development and test stability while allowing Express 5 in consuming applications.

What's next

Fedify 1.10.0 serves as a stepping stone toward the upcoming 2.0.0 release. The optional list() method introduced in this version will become required in 2.0.0, simplifying the interface contract and allowing Fedify internals to rely on prefix scanning being universally available. The enhanced #OpenTelemetry instrumentation and FedifySpanExporter provide the foundation for implementing the debug dashboard proposed in issue #234.
The next steps include building the web dashboard UI with real-time activity lists, filtering, and JSON inspection capabilities—all as a separate package that leverages the standards-based observability infrastructure introduced in this release. Depending on the development timeline and feature priorities, there may be additional 1.x releases before the 2.0.0 migration.

For developers building custom KvStore implementations, now is the time to add list() support to prepare for the eventual 2.0.0 upgrade. The implementation patterns used in the official backends provide clear guidance for various storage strategies.

Acknowledgments

Special thanks to Cho Hasang (@crohasang@hackers.pub) for the NestJS 11 compatibility improvements, and to all community members who provided feedback and testing for the new observability features.

For the complete list of changes, bug fixes, and improvements, please refer to the CHANGES.md file in the repository.

#fedidev #release
  • 0 Votes
    1 Posts
    3 Views
When I started building Fedify, an ActivityPub server framework, I ran into a problem that surprised me: I couldn't figure out how to add logging. Not because logging is hard—there are dozens of mature logging libraries for JavaScript. The problem was that they're primarily designed for applications, not for libraries that want to stay unobtrusive.

I wrote about this a few months ago, and the response was modest—some interest, some skepticism, and quite a bit of debate about whether the post was AI-generated. I'll be honest: English isn't my first language, so I use LLMs to polish my writing. But the ideas and technical content are mine. Several readers wanted to see a real-world example rather than theory.

The problem: existing loggers assume you're building an app

Fedify helps developers build federated social applications using the ActivityPub protocol. If you've ever worked with federation, you know debugging can be painful. When an activity fails to deliver, you need to answer questions like:

- Did the HTTP request actually go out?
- Was the signature generated correctly?
- Did the remote server reject it? Why?
- Was there a problem parsing the response?

These questions span multiple subsystems: HTTP handling, cryptographic signatures, JSON-LD processing, queue management, and more. Without good logging, debugging turns into guesswork.

But here's the dilemma I faced as a library author: if I add verbose logging to help with debugging, I risk annoying users who don't want their console cluttered with Fedify's internal chatter. If I stay silent, users struggle to diagnose issues.

I looked at the existing options. With winston or Pino, I would have to either:

- Configure a logger inside Fedify (imposing my choices on users), or
- Ask users to pass a logger instance to Fedify (adding boilerplate)

There's also debug, which is designed for this use case.
But it doesn't give you structured, level-based logs that ops teams expect—and it relies on environment variables, which some runtimes like Deno restrict by default for security reasons.

None of these felt right. So I built LogTape—a logging library designed from the ground up for library authors. And Fedify became its first real user.

The solution: hierarchical categories with zero default output

The key insight was simple: a library should be able to log without producing any output unless the application developer explicitly enables it. Fedify uses LogTape's hierarchical category system to give users fine-grained control over what they see. Here's how the categories are organized:

    Category                             What it logs
    ["fedify"]                           Everything from the library
    ["fedify", "federation", "inbox"]    Incoming activities
    ["fedify", "federation", "outbox"]   Outgoing activities
    ["fedify", "federation", "http"]     HTTP requests and responses
    ["fedify", "sig", "http"]            HTTP Signature operations
    ["fedify", "sig", "ld"]              Linked Data Signature operations
    ["fedify", "sig", "key"]             Key generation and retrieval
    ["fedify", "runtime", "docloader"]   JSON-LD document loading
    ["fedify", "webfinger", "lookup"]    WebFinger resource lookups

…and about a dozen more. Each category corresponds to a distinct subsystem. This means a user can configure logging like this:

    await configure({
      sinks: { console: getConsoleSink() },
      loggers: [
        // Show errors from all of Fedify
        { category: "fedify", sinks: ["console"], lowestLevel: "error" },
        // But show debug info for inbox processing specifically
        {
          category: ["fedify", "federation", "inbox"],
          sinks: ["console"],
          lowestLevel: "debug",
        },
      ],
    });

When something goes wrong with incoming activities, they get detailed logs for that subsystem while keeping everything else quiet. No code changes required—just configuration.
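Under the hood, this kind of hierarchical filtering amounts to a prefix check on category arrays. Here is a self-contained sketch of the idea; the categoryMatches helper is invented for illustration and is not LogTape's actual code:

```typescript
// Hypothetical helper: does a logger's category fall under a configured one?
// A configured category matches when it is a prefix of the logger's category,
// so a shorter category covers its entire subtree.
function categoryMatches(configured: string[], logger: string[]): boolean {
  return (
    configured.length <= logger.length &&
    configured.every((part, i) => logger[i] === part)
  );
}

// ["fedify"] covers every Fedify subsystem...
console.log(categoryMatches(["fedify"], ["fedify", "federation", "inbox"])); // true
// ...while a deeper category only covers its own subtree.
console.log(categoryMatches(["fedify", "sig"], ["fedify", "federation", "inbox"])); // false
```

This is why enabling debug output for ["fedify", "federation", "inbox"] leaves the signature and WebFinger subsystems quiet: their categories share the ["fedify"] root but not the deeper prefix.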
Request tracing with implicit contexts

The hierarchical categories solved the filtering problem, but there was another challenge: correlating logs across async boundaries. In a federated system, a single user action might trigger a cascade of operations: fetch a remote actor, verify their signature, process the activity, fan out to followers, and so on. When something fails, you need to correlate all the log entries for that specific request.

Fedify uses LogTape's implicit context feature to automatically tag every log entry with a requestId:

    await configure({
      sinks: {
        file: getFileSink("fedify.jsonl", { formatter: jsonLinesFormatter }),
      },
      loggers: [
        { category: "fedify", sinks: ["file"], lowestLevel: "info" },
      ],
      contextLocalStorage: new AsyncLocalStorage(), // Enables implicit contexts
    });

With this configuration, every log entry automatically includes a requestId property. When you need to debug a specific request, you can filter your logs:

    jq 'select(.properties.requestId == "abc-123")' fedify.jsonl

And you'll see every log entry from that request—across all subsystems, all in order. No manual correlation needed. The requestId is derived from standard headers when available (X-Request-Id, Traceparent, etc.), so it integrates naturally with existing observability infrastructure.

What users actually see

So what does all this configuration actually mean for someone using Fedify?

If a Fedify user doesn't configure LogTape at all, they see nothing. No warnings about missing configuration, no default output, and minimal performance overhead—the logging calls are essentially no-ops.

For basic visibility, they can enable error-level logging for all of Fedify with three lines of configuration. When debugging a specific issue, they can enable debug-level logging for just the relevant subsystem. And if they're running in production with serious observability requirements, they can pipe structured JSON logs to their monitoring system with request correlation built in.
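The machinery that makes this request correlation possible is Node's AsyncLocalStorage, which carries values across await boundaries without explicit passing. A self-contained sketch of the principle; the log helper and entries array are illustrative stand-ins, not LogTape's API:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Each "request" runs inside a context holding shared properties.
const context = new AsyncLocalStorage<{ requestId: string }>();

const entries: { message: string; requestId?: string }[] = [];

// Illustrative logger: merges whatever is in the current context
// into every entry, the way implicit contexts tag log records.
function log(message: string): void {
  entries.push({ message, ...context.getStore() });
}

async function handleRequest(requestId: string): Promise<void> {
  await context.run({ requestId }, async () => {
    log("fetching remote actor");
    await Promise.resolve(); // cross an async boundary
    log("verifying signature"); // still tagged with the same requestId
  });
}

const finished = handleRequest("abc-123");
finished.then(() => console.log(entries)); // both entries carry requestId "abc-123"
```

Neither call site mentions the requestId, yet both entries carry it, which is exactly what makes the jq filter above able to pull out one request's logs across subsystems.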
The same library code supports all these scenarios—whether the user is running on Node.js, Deno, Bun, or edge functions, without extra polyfills or shims. The user decides what they need.

Lessons learned

Building Fedify with LogTape taught me a few things:

- Design your categories early. The hierarchical structure should reflect how users will actually want to filter logs. I organized Fedify's categories around subsystems that users might need to debug independently.
- Use structured logging. Properties like requestId, activityId, and actorId are far more useful than string interpolation when you need to analyze logs programmatically.
- Implicit contexts turned out to be more useful than I expected. Being able to correlate logs across async boundaries without passing context manually made debugging distributed operations much easier. When a user reports that activity delivery failed, I can give them a single jq command to extract everything relevant.
- Trust your users. Some library authors worry about exposing too much internal detail through logs. I've found the opposite—users appreciate being able to see what's happening when they need to. The key is making it opt-in.

Try it yourself

If you're building a library and struggling with the logging question—how much to log, how to give users control, how to avoid being noisy—I'd encourage you to look at how Fedify does it. The Fedify logging documentation explains everything in detail. And if you want to understand the philosophy behind LogTape's design, my earlier post covers that.

LogTape isn't trying to replace winston or Pino for application developers who are happy with those tools. It fills a different gap: logging for libraries that want to stay out of the way until users need them. If that's what you're looking for, it might be a better fit than the usual app-centric loggers.
  • 0 Votes
    1 Posts
    7 Views
    A CVSS 10.0 critical-severity vulnerability affecting server-side use of React.js, tracked as:

- CVE-2025-55182 in React.js
- CVE-2025-66478 specifically for the Next.js framework

https://react2shell.com #infosec #javascript #react
  • 0 Votes
    1 Posts
    8 Views
    Something to look at on company time: old operating systems emulated in #JavaScript https://www.pcjs.org/